Security in the Cloud - An Interview with Ratinder Ahuja, CEO of ShieldX
Interview with Ratinder Ahuja, CEO of ShieldX:
Cyber Security Dispatch: Season 02, Episode 03
Show Notes:
On today’s episode we welcome Ratinder Ahuja, the CEO and co-founder of ShieldX. With many years in the cyber security profession, notably working at McAfee before starting his current company, Ratinder has great experience and perspective on the field. In our discussion he explains the beginnings of ShieldX and the reasons that founding the company was necessary. Our guest gives us a great explanation of the terms ‘horizontal’ and ‘east-west’ security and the central role of these ideas in his business. We chat about the migration of on-premise systems to cloud services as well as the compatibility that ShieldX shares with the major web services. We also cover common usages of the company’s security and hear from Ratinder about why the new ways in which ShieldX operates surpass old, agent-based approaches. We finish off the conversation by recapping the three-dimensional approach to security that Ratinder and the company employ and how this might evolve in the near future. Tune in to hear it all!
Key Points From This Episode:
- The beginnings of ShieldX and the time leading up to this.
- The arrival of the cloud and the effect of ‘east-west’ security.
- Implications for the lack of orchestration for traditional systems.
- Reducing the total cost of ownership in addressing these scenarios.
- Transferring the security of on-premise systems to the larger, cloud scale.
- The logistics of migrating your security to any of the large cloud services.
- The futility of an agent-based approach to cloud security.
- Compatibility and the platforms with which ShieldX corresponds.
- Customer experience and how the service has been most widely utilized.
- The three-dimensional problem that ShieldX solves and secures.
- Some information on ShieldX’s investors.
- And much more!
Links Mentioned in Today’s Episode:
ShieldX — https://www.shieldx.com/
Ratinder Ahuja on LinkedIn — https://www.linkedin.com/in/ratinderahuja
McAfee — https://www.mcafee.com/en-us/index.html
Reconnex — https://www.bloomberg.com/profiles/companies/2925257Z:US-reconnex-corp
Gartner MQ — https://pages.alteryx.com/analyst-report-2018-gartner-mq-data-science-machine-learning.html
Equifax — https://www.equifax.com/personal/
AWS — https://aws.amazon.com/
Microsoft Azure — https://azure.microsoft.com/en-us/
GCP — https://cloud.google.com/
Docker — https://www.docker.com/
Kubernetes — https://kubernetes.io/
Alaska Air — https://www.alaskaair.com/
Bain Capital — https://www.baincapital.com/
VMware — https://www.vmware.com/
Cisco ACI — https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html
Aspect Ventures — http://aspectventures.com/
FireEye — https://www.fireeye.com/
Symantec — https://www.symantec.com/
Dimension Data — https://www2.dimensiondata.com/
Introduction:
Welcome to another edition of Cyber Security Dispatch. This is your host, Andy Anderson. In this episode, Security in the Cloud, we talk with Ratinder Ahuja, CEO of ShieldX. We cover the challenge of securing systems and networks in an era where core functions are increasingly housed in cloud environments, outside the corporate network, and how the historic tools used to secure those systems just don’t fit anymore.
It’s a challenge I know many folks are struggling with, and I think Ratinder’s perspective is certainly one our users will enjoy hearing.
TRANSCRIPT
[0:00:40.6] Andy Anderson: Why don’t we start there, with your background. How did you go from that to starting a technology security startup?
[0:00:46.6] Ratinder Ahuja: That’s great. Yeah, so let me tell you the genesis behind ShieldX. We started ShieldX towards the end of 2015 - January 2016. Prior to starting ShieldX, I was CTO for McAfee’s network security portfolio. We had a broad range of network security products: the classical next-gen advanced firewalls, intrusion threat prevention, data loss prevention, security gateways - the classical systems. I got into McAfee because I was founder of a company called Reconnex doing data loss prevention products - a consistent Gartner MQ winner - and they acquired that company in the 2009 timeframe. That's how I ended up at McAfee.
During my years at McAfee, the cloud revolution started, and many of the customers that we would talk to were adopting a multi-cloud architecture, meaning their on-premise data centers were becoming more orchestrated, and at the same time they were using one or more public cloud footprints. The whole idea was that they were treating compute, storage, and networking as code, harnessing the agility that comes with the cloud, and delivering value at a faster pace.
During this transition, many of the customers started questioning, saying, “How does security fit into this new world?” The traditional vendors would answer, “Oh, we have big appliances and we have some small appliances and we have virtual appliances of every kind. What's the problem?” But three key problems came along.
The first one was that there was heightened awareness of the fact that there is something called east-west access - the traffic on the inside of the multi-cloud. This awareness came about because traditionally data centers were walled gardens. You had a very strong north-south boundary control, but you didn't have much visibility inside those environments. Then more events like Equifax happened, where something would come in one way or another, laterally move towards a high-value target, and cause some data loss. That awareness was rising, and it was being given a term: east-west security.
Then, when the data centers connected into public clouds, you would have a very common configuration where the public cloud footprint would connect back into the data center using direct connect, or VPNs. The public cloud footprint would also have an internet-facing portal. All of a sudden, you would now have this attack surface which has grown tremendously; you not only have to protect the things that you protect in the on-premise environment, but now you have to worry about them going into the cloud - elasticity, on-demand computing, all those things made the attack surface really large.
To summarize the first issue, there was heightened awareness of east-west security, and the customers found that it was very difficult, almost impossible, to take that lateral traffic and somehow spin it out to physical controls on premise, or to try to sprinkle virtual appliances throughout the lateral axis. That was problematic, because these appliances and chassis do not understand the whole idea of virtual network orchestration and automation across this network - that was the first problem. To summarize it: the heightened awareness of east-west security, and the total failure of existing products and solutions to solve that adequately. Does that first problem make sense?
[0:03:48.3] AA: Yeah. No, definitely. Just to put a bow on it - east-west and lateral you consider to be synonyms, or am I missing a subtle difference?
[0:03:58.6] RA: You’re right. It's typically called lateral movement - it's the term used to describe how things propagate towards a high-value target. East-west security is a mechanism to say, let me secure that path.
[0:04:09.4] AA: Yeah, and the appliances were really struggling largely because of the trouble of: you're really using IP addresses to locate servers for whatever your infrastructure looks like in an on-prem world. Then once you go to a cloud world where you're just spinning up different machines and whatnot, it's like trying to hold sand in your hand. It's changing constantly - or what's the challenge of –
[0:04:41.6] RA: Exactly, exactly. You have just hit upon the second problem that I was going to illuminate. The second problem was this whole idea of lack of orchestration for these traditional systems. What that meant is exactly what you were just saying: the DevOps teams and the application teams can now simply bring up new applications and scale things up faster than the security team can say ‘no’, or ‘yes’, or ‘what are you guys doing?’
In the physical world, there was a well-defined process by which the security team and the infrastructure team and the application team got together, planned out what they were going to do for the next five years, and put in an over-engineered solution to intersect those traffic flows, right? But in the new world, the DevOps teams simply treat the infrastructure as code and roll out applications and make them scale up and down on demand.
Now the security team wanted a system that would be equally agile and orchestrated, that would understand the intent of the security team. The security team says, “If a web tier, or middleware apps, or storage tiers show up, I want to protect them with a certain profile, with a certain threat protection and the appropriate access controls, and watch for information loss if it's going to happen,” right? This is their intention. Because they're now divorced from the infrastructure team and the apps team, they wanted a system that would take this intention and transform it into reality, and that's what we built.
This is the second point, about having something which is automated and orchestrated. That leads to the third point, which is reduction in TCO, because the teams aren't getting any bigger and they are tasked with protecting an even vaster, broader attack surface, so the solution had to have an appropriate TCO. Those were the three key problems: east-west protection, a solution that is highly orchestrated and automated and can transform security intention into reality, and all of that at a reduced TCO.
[0:06:38.3] AA: TCO, for those who aren’t familiar with the acronym?
[0:06:40.3] RA: Total Cost of Ownership, which is not just the cost of licenses and software, but the cost of operating such systems.
[0:06:47.8] AA: Yeah. I mean, I'd like to span the range. The audience tends to be very sophisticated, but also often less sophisticated, so I like to make sure that it's really grounded. It sounds like in the old model, the security team provides the sandbox, right? You can play in the sandbox, do whatever you want, but it's walled in. Now the dev teams can go to the beach and do whatever they want, right? How do you create a sandbox that moves with them on the beach?
Let’s talk about how you actually do that. How do you bring the individual controls, the typical security controls that you would expect in more of a classic on-prem environment, to the cloud world?
[0:07:32.6] RA: Yeah, excellent. This is where ShieldX was born. We realized that something innovative needed to be done to address these problems that were emerging because of multi-cloud adoption. We set out to solve three things. The first one was: how do you build a solution that is of cloud scale - that is no longer constrained by the CPU-memory geometry of an appliance, or a box, the way these things have traditionally been built? As a result of that geometry, you were always constrained by what you could do, and you created lots of problems for customers, because the customer would have to somehow replicate and scale these solutions and do traffic engineering and load balancing and all kinds of things to get scale.
That was the first thing to solve. We looked at cloud principles and effective architectures, and what the cloud brought was a new way of thinking around horizontal scaling at the pain point. What that means is that in the old world you had monolithic entities - either hardware that you replicated over and over, or a monolithic piece of software that was replicated over and over - just to address a single bottleneck in that software stack, right?
The cloud world says, “Hey, we don't spin up massive systems. We horizontally scale out where the bottleneck is.” To achieve that, there's an architectural concept called microservices. What it says is: break your system up into its elemental building blocks. In security processing, there are typically three fundamental building blocks, namely traffic flow processing, encryption and decryption, and something called deep packet inspection, which is the ability to look deeper inside flows and deliver security outcomes.
In a traditional virtualized system, if any of those stages becomes a bottleneck, the entire system suffers as a result. What we said was, “Hey, if we could horizontally scale out a bottleneck as it appears, then we would have a very efficient system.” We literally turned these elemental building blocks into containerized microservices. Now we have a containerized microservice that does flow processing, another one that does encryption and decryption, another one that does deep inspection. There are about 30 of them, but for simplicity we'll say there are these elemental building blocks.
Now we horizontally scale them up and down. The value is that they're tiny - a fraction of the size of a big monolithic security stack - and they can be efficiently scaled at the pain point on demand. We do all of that: we manage the orchestration of these microservices, we manage their insertion into the network, we manage their scale-up and scale-down, so the customer sees a single system that is inherently elastically scaling - it is as if somebody was magically bringing in new line cards and expanding the shelves of the chassis to accommodate traffic growth, and then taking them away when the need is gone, right? Something which is incredibly efficient and elastic. This approach broke the CPU-memory geometry bottleneck problem that has plagued the network security industry forever.
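To make the "scale out only the bottleneck" idea concrete, here is a minimal sketch in Python. It is not ShieldX's orchestrator; the stage names, utilization thresholds, and replica logic are illustrative assumptions, but it shows the core difference from replicating a monolithic appliance: each elemental stage is measured and scaled independently.

```python
# Hypothetical sketch: scale each security microservice independently,
# rather than replicating a whole monolithic security stack.
from dataclasses import dataclass

@dataclass
class Microservice:
    name: str           # e.g. flow processing, TLS decrypt, deep packet inspection
    replicas: int
    utilization: float  # 0.0-1.0, averaged across the current replicas

SCALE_UP_AT = 0.80      # assumed threshold: this stage is the current pain point
SCALE_DOWN_AT = 0.30    # assumed threshold: give capacity back when demand falls

def rebalance(pipeline: list) -> None:
    """Horizontally scale only the stages that are actually the bottleneck."""
    for svc in pipeline:
        if svc.utilization >= SCALE_UP_AT:
            svc.replicas += 1            # add a replica at the pain point
        elif svc.utilization <= SCALE_DOWN_AT and svc.replicas > 1:
            svc.replicas -= 1            # shrink when the need is gone

if __name__ == "__main__":
    pipeline = [
        Microservice("flow-processing", replicas=2, utilization=0.45),
        Microservice("tls-decrypt",     replicas=2, utilization=0.91),  # bottleneck
        Microservice("dpi",             replicas=4, utilization=0.28),
    ]
    rebalance(pipeline)
    for svc in pipeline:
        print(svc.name, svc.replicas)
```

In a real deployment the new replica counts would be applied through the container orchestrator rather than printed, but the decision stays per-stage rather than per-appliance.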
[0:10:25.7] AA: Got you. Just to bring it to specifics, let's say we're in one of these major public cloud providers - and I won't make you name one if you don't want to - like AWS or Azure, or one of the others. Now I've got a dev environment there. Essentially, I'm going to install the system in my Amazon infrastructure and it's going to start spinning up additional virtual machines, etc., as needed around my environment? Or how does that work?
[0:10:59.1] RA: Excellent. Perfect. This then brings us to how the system actually inserts itself, how automation and orchestration happen. We do what you just mentioned, but we do it at a grand scale. The first thing that happens is you download a first piece of software - we call it the controller. This thing comes up and it takes service credentials to your on-premise ESX environment, or your Amazon and your Azure environments, so we support those three environments of course.
From these service credentials, we start doing discovery. Discovery means we are now able to figure out what networks you have and what subnets you have and how many - using the specific terminology for each cloud, how many VPCs and VNets and various artifacts exist in that environment. We discover all of those. We then discover what workloads are there and the nature of the workloads. This bridges the first gap, where the security team says, “I don't know what the dev teams are doing,” so this gives them the first visibility: “Here's your landscape.”
Then, to make sense out of this - because there will be thousands and thousands of subnets and networks and many thousands of workloads - what we do, using a number of techniques including machine learning and clustering classification, is place these assets into logical groups and give them a presentation that says ‘this is what an app looks like’. Here's what an HR app looks like, these are the building blocks of that app, this is how they're distributed amongst the various infrastructures.
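As a rough illustration of what the discovery step might look like for a single cloud, here is a hedged sketch against the AWS API using boto3. It is not ShieldX's discovery engine - it only lists VPCs, subnets, and instances for one region, assumes credentials are already configured in the environment, and ignores pagination - but it shows the kind of inventory the grouping step would start from.

```python
# Minimal, illustrative discovery of one AWS footprint (not ShieldX code).
import boto3

def discover_footprint(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    vpcs = [v["VpcId"] for v in ec2.describe_vpcs()["Vpcs"]]
    subnets = [
        {"subnet": s["SubnetId"], "vpc": s["VpcId"], "cidr": s["CidrBlock"]}
        for s in ec2.describe_subnets()["Subnets"]
    ]
    workloads = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for inst in reservation["Instances"]:
            workloads.append({
                "id": inst["InstanceId"],
                "subnet": inst.get("SubnetId"),
                "tags": {t["Key"]: t["Value"] for t in inst.get("Tags", [])},
            })
    return {"vpcs": vpcs, "subnets": subnets, "workloads": workloads}

if __name__ == "__main__":
    fp = discover_footprint()
    print(len(fp["vpcs"]), "VPCs,", len(fp["subnets"]), "subnets,",
          len(fp["workloads"]), "workloads discovered")
```

A multi-cloud product would repeat the same idea against the Azure and vCenter APIs and then feed the inventory into the clustering step described above.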
The security team now, for the first time ever, has visibility into this infrastructure. Based on this gained visibility, they then apply their policy intention. They say: “Okay, great, the web tier should be protected against threats, because it's public-facing. If any PII (Personally Identifiable Information) or PCI data goes out of that, I want to block it or tell me about it.” Then they say, great, the web tier needs to talk to the other middleware and app tiers, so let's make sure that only the right web tier can talk to the backend.
That means I need very specific access control policies. Threats may get through the web tier, so I want to protect the apps now; I want threat prevention against those attack surfaces. Then they continue their intention: for the storage elements, make sure there is no unencrypted PCI or PII regulatory content. Again, only certain apps can talk to certain storage elements, and the storage elements also have attack surfaces, so those should be protected. This is the security intention - they're expressing their intention across the model that we have discovered for them.
At no stage do they worry about where these elements are. The web tier could be across three different clouds - don't care - or you could move it from one cloud to the other - don't care. The storage elements could be on-premise, big databases, or there may be supplementary storage elements in the cloud - don't care. We handle that. We now transform the intention that has been applied to this model into actual insertion across these diverse cloud networks, and then we instantiate the appropriate microservices that turn that policy intention into actual controls - whether they are access controls, or threat prevention, or malware prevention, or data loss prevention.
Then we keep it all consistent; meaning as you scale your applications up or down, or you migrate an application from one cloud to the other, that automated discovery continues and we keep transforming that intention into reality by instrumenting and orchestrating our microservices to go into those environments at the right time, at the right place, at the right scale.
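The following sketch shows one way this intent-to-realization idea could be expressed in code. The schema, tier names, and control names are assumptions made for illustration - not ShieldX's actual policy model - but the point is the same: intent is attached to logical tiers, and the concrete plan is computed from wherever the discovered members of those tiers happen to run.

```python
# Hedged sketch: tier-level security intent, realized against discovered workloads.
INTENT = {
    "web-tier":     {"controls": ["threat-prevention", "dlp:PII", "dlp:PCI"],
                     "may_talk_to": ["app-tier"]},
    "app-tier":     {"controls": ["threat-prevention"],
                     "may_talk_to": ["storage-tier"]},
    "storage-tier": {"controls": ["dlp:PII", "dlp:PCI"],
                     "may_talk_to": []},
}

def realize(intent, discovered_groups):
    """Turn per-tier intent into concrete insertions for each workload,
    regardless of which cloud or data center the workload lives in."""
    plan = []
    for tier, members in discovered_groups.items():
        policy = intent.get(tier)
        if not policy:
            continue
        for workload in members:
            plan.append({
                "workload": workload,             # e.g. an instance, VM, or container id
                "insert": policy["controls"],     # which inspection microservices to attach
                "allow": policy["may_talk_to"],   # access-control whitelist between tiers
            })
    return plan

if __name__ == "__main__":
    groups = {"web-tier": ["i-0abc (AWS)", "vm-12 (ESX)"],
              "app-tier": ["vmss-3 (Azure)"]}
    for step in realize(INTENT, groups):
        print(step)
```

Because the plan is recomputed whenever discovery sees a change, a workload that scales out or migrates to another cloud simply shows up as a new member of its tier and inherits the same controls.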
[0:14:26.2] AA: Okay. Essentially - that deep packet inspection, or that analysis of where the connections are happening, or the control of PII or PCI - is that home-built stuff, or is that pulling best-of-class from existing solutions? That seems a lot for a startup to take on all at once.
[0:14:49.5] RA: Excellent question. We get asked this all the time: “Hey, are you guys using other people's stuff? Are you building your own?” The system is built using these microservices that are highly reusable. The deep inspection engine is the one that is doing threat inspection; the same DPI engine does the URL inspection and the content inspection for PCI and PII, but those are essentially different content loads into a DPI microservice. What we do is commercially source the threat definitions from partners like Symantec, or FireEye, or TELUS Labs and so forth. But the engines - these microservices that implement those definitions and tokens of interest - are our own.
The reason a startup like us can do it is because we had deep insight into how these systems were built. My team and I have built, over the last 25 years, proxies, firewalls, intrusion threat prevention, data loss prevention - all the things that you see on the market we have built over the years, and we created all these problems. The secret we learned is: if you build the elemental building blocks correctly, then - using this microservices architecture - you can get different personalities that give you different security outcomes. You get this massive leverage and consolidation into a single pane of glass.
[0:15:59.5] AA: I think that's a great answer. I mean, the infrastructure is new, your own, but the signatures and patterns come from more standard sources, which is what I would expect, right? The guys who have been watching this stuff and seeing the volume of traffic are probably the best source for that.
How about containers - I mean, you guys have containerized your services, which I think is great. How about securing infrastructure that's been containerized by a customer's development team or user?
[0:16:34.5] RA: Excellent. We saw this evolution happening - from physical environments, to virtualized environments, to the cloud footprints of today, Amazon AWS and so forth, and moving towards Kubernetes and Docker and those kinds of environments, right? There are a lot of startups that are point solutions - you've got a point solution for AWS, another point solution for Kubernetes or Docker containers. We wanted to make this thing scale and be available across this broad spectrum, because any large enterprise customer will have all those things.
Like I said, our microservices are actually built using Docker containers. What happens today is we wrap them in the native virtualization architecture: in VMware it looks like a VM, in Amazon it looks like an AMI. As we move towards a Docker-native solution, we shed that outer skin and these various microservices and controls run as actual Docker containers. Then we intersect the Docker container networking layer - which is an OVS switch, or a Docker bridge, or something of that nature, or a service mesh - and that's where we intersect the traffic, and then we use our containerized microservices to give you the exact same threat detection and policy behavior as you would get in your physical or virtualized environment. Consistent security controls, consistent ways to express your policy intention across this diverse spectrum.
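To give a feel for what "intersecting traffic at the container networking layer" means in miniature, here is a hedged sketch of a transparent TCP relay that sits in front of a backend and passes every chunk of traffic through an inspection hook before forwarding it. The listen port, the upstream address, and the toy inspection rule are assumptions for illustration; a real data path would sit at the bridge, OVS, or service-mesh layer rather than as an explicit proxy like this.

```python
# Illustrative inline inspection point for container traffic (not ShieldX's data path).
import asyncio

UPSTREAM = ("app-backend", 8080)   # hypothetical service behind the inspection point

def inspect(chunk: bytes) -> bool:
    """Stand-in for a DPI microservice: allow unless a marker string appears."""
    return b"BLOCK_ME" not in chunk

async def relay(client_reader, client_writer):
    up_reader, up_writer = await asyncio.open_connection(*UPSTREAM)

    async def pump(src, dst):
        while True:
            data = await src.read(4096)
            if not data or not inspect(data):   # EOF or a blocking verdict
                break
            dst.write(data)
            await dst.drain()
        dst.close()

    await asyncio.gather(pump(client_reader, up_writer),
                         pump(up_reader, client_writer))

async def main():
    server = await asyncio.start_server(relay, "0.0.0.0", 9090)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

The same inspection logic can then be packaged as its own container and scaled independently, which is the consistency point being made above: the control is identical whether the workload is a VM or a container.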
[0:17:49.7] AA: Got you. One of the things that I hear a lot - and I'd love your opinion, I know it may not be core to your solution - one of the challenges with containers is the shared kernel of the architecture. How do you get comfortable with that potential threat vector: once you're sharing a kernel, the potential to get root access to that kernel, etc.?
[0:18:16.3] RA: Absolutely. We are a networking solution, so we add value once the traffic goes in and out of a container. We believe that the container environments themselves will harden against root escalation and hacks that break the container boundary. Those, we believe, will be natural evolutions of the container architectures. Just like Windows, where we went from Windows 3.1 to Windows 10 and Windows 10 is highly isolated, to the point where even an agent isn’t effective; on iOS and Android, an agent can add no value because they've taken that isolation to a degree of sophistication that makes sure nothing breaks that boundary.
We believe Docker containers and these architectures will offer that, better and better over time. An agent-based solution will not add value. Networking will continue to add value, because inbound attacks and outbound content are still what you want to look for.
[0:19:06.4] AA: How about when you deploy yourself - are you essentially spinning up a separate virtual infrastructure inside their environment? I mean, even though eventually that hardening will happen - and I understand that’s outside your purview - in the current environment where there may still be concern about that, how do you separate your own architecture from whatever the typical DevOps infrastructure is?
[0:19:32.6] RA: Our microservices and our containers come from within our controller - the controller that we ship out has these containers in its belly, if you will, and that's what we spin out. These are built using our own fairly well-defined secure software development lifecycle, where everything has gone through hardening and intrusion testing and so forth, so it's hardened.
It does not co-mingle code with the customer’s DevOps process. We don’t impact the DevOps process. We don't expect them to put any of our code inside there - we're not an agent - so that's why we don't impact them and they don't impact us.
[0:20:07.6] AA: Is it sitting on a separate server, like a virtual machine server, or is it sharing the same services?
[0:20:13.8] RA: No. It appears as a virtual entity. Let's take the example of ESX, which is a fairly simple virtualization architecture to understand. Our microservices are wrapped up to look like a VM, and they get spun up through vCenter on a set of resources that they give us - and that's what we look like.
[0:20:30.8] AA: Okay. So far you're in Azure, AWS, and places like VMware - where else are you able to defend? Where else are you going to deploy?
[0:20:39.2] RA: Yeah, so those three are our current shipping environments. Then we're just waiting for the world to become more Docker- or container-ready, and then we will just shed the outer skin, and Kubernetes/Docker would be the next logical place to land. GCP, the Google Cloud Platform, is very close to that; Red Hat OpenShift and various other environments are pretty much there. I think that's where the world is headed, so we'll be right there.
[0:21:04.5] AA: Yeah. This has been great. I mean, really interesting to dive into the technology. Where are you seeing adoption from different customers? Where are they finding this most interesting?
[0:21:16.5] RA: Yeah. Our customers have a heightened awareness of east-west security, and if they have a multi-cloud environment, that's even better, because then they are facing all the problems we talked about - they have experienced those firsthand. If you look at our website, for example, we cite Alaska Airlines as one of our customers - as you can imagine, a fairly large organization, very security-conscious because they are running critical infrastructure, and at the same time they're handling PII and PCI. We work with them; January 16, 2016 is when I made the first presentation to them.
We had an incredible philosophical alignment in the way we looked at the problem, and they had firsthand experience trying to operationalize traditional controls in these multi-cloud environments. Excellent philosophical alignment; they helped us, they were our alpha customer, then a beta customer, eventually becoming a paid customer and continuing with the expansion. That's a classic example.
Others that we cite on our website are some financial services companies and some state government organizations. The common theme is: I am now aware of this thing called lateral movement, or east-west security problems; the problem is getting worse because of a multi-cloud footprint. I need a solution that can work consistently across multiple clouds and understands the fact that I am now divorced from the infrastructure, so I just need to express my intention and have the system turn it into reality, and make sure that thing runs with automation and orchestration, so my TCO, my cost of ownership, is lower, right? That's the common theme.
[0:22:39.2] AA: Essentially, when you plug into that - let's say you plug into a company that's doing some stuff in a couple of clouds - what are the pieces that you’re replacing? What are they going to drop out of their security stack?
[0:22:56.3] RA: Yeah. Typically, either they're going there with no security, or they’re going there using the basic, primitive ACL-type controls of the cloud environments, or they're trying to retrofit their existing physical or virtual chassis into those environments. That's typically the spectrum. Once we are able to help them understand the ease of use of what we have, the automation and orchestration that we have, the richness of the controls, and the ability to do micro-segmentation across multiple clouds from one pane of glass - that's what helps bring them towards us.
[0:23:34.5] AA: Very cool. How about on-prem? Once they’re grabbing it in the cloud, are they trying to pull it back into the on-prem, or is that not a scenario you see?
[0:23:43.3] RA: No, no. The ESX data center on premise - 99% of enterprises have that. That is a prime spot for us. In many cases we’ll start there, because they want to micro-segment that data center. Micro-segmentation is a concept where you try to create isolation in a typically flat environment. Flat environments are prone to the risk of lateral movement, so the first step is visibility, then to create some segmentation and isolation, and then insert some controls in there. That is our sweet spot, because doing that on ESX, or NSX, or with ACI is very difficult - you’re trying to put controls, physical chassis, or virtual chassis throughout your environment.
We definitely solve that; we come in through that vector and then expand to the multi-cloud, or we'll come in from the other side - Azure or AWS - and then come into the ESX side, so this little golden triangle that we have works really well for us.
[0:24:34.4] AA: In terms of that segmentation, do you do that on the axis of specific individuals, or different machines, or different applications? Walk us through a little how you do that segmentation, because I know there are a lot of different ways to think about it.
[0:24:50.2] RA: Absolutely, absolutely. Yes, so if you look at a traditional data center - and just to keep it simple, let's say they have a typical three-tier app: a front end, middleware, and a storage back end, right? Where the database is the storage. Typically, they'll have multiple instances of this app; meaning, they'll have a production version, a test version, and a dev version, right?
Now, in the worst case, we find that all of this is on a big, flat V-switch, right? A completely flat thing, arbitrarily brought up. You see this quite often. Now this is ripe for lateral movement. You can have lateral movement within the web tier, you can have lateral movement from one web tier to the dev back-end, because that wasn't secured as much, right? The first question customers ask is, “Okay, do I have this problem or not?” That's visibility and discovery.
The next step: they'll say, “Okay, make sure all my assets are isolated,” which is the definition of the micro-segmentation aspect. This is where they go in and we say, “Okay, great. Let's start tier by tier, and within the tier, layer by layer.” We would then say, “Great, discover all the web server front ends.” If they were on the same V-switch, we would then go manipulate that V-switch definition - through vCenter of course - and break it apart into smaller units called ‘port groups’. Then we insert ourselves at those port group boundaries.
All of a sudden, without you having to do anything configuration-wise, we've automated that entire concept down to a mouse click. Beyond that, we put a rule in place which says: if similar infrastructure elements ever show up and they are not following the rule, go ahead and micro-segment them again, and so the continuous discovery makes sure that it is kept consistent as changes happen in your environment. This is then expanded to all the other tiers and the multiple instances of each tier, so this basically works with discovery and a couple of mouse clicks.
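As a rough picture of the mechanics just described, the sketch below plans per-tier port groups from a flat segment and marks the boundaries as insertion points. The vCenter operations are represented by print statements standing in for hypothetical helpers; a real implementation would drive the vSphere API, and the tier labels would come from the discovery and grouping step.

```python
# Hedged sketch of planning micro-segmentation from a flat virtual switch.
from collections import defaultdict

def plan_segmentation(workloads):
    """workloads: [{'name': ..., 'tier': ..., 'vswitch': ...}, ...]
    Group members into one port group per (vswitch, tier) instead of one flat segment."""
    port_groups = defaultdict(list)
    for wl in workloads:
        port_groups[(wl["vswitch"], wl["tier"])].append(wl["name"])
    return dict(port_groups)

def apply_segmentation(port_groups):
    for (vswitch, tier), members in port_groups.items():
        pg_name = f"{vswitch}-{tier}-pg"
        # Stand-ins for the real vCenter calls that would create the port group
        # and insert an inspection microservice at its boundary.
        print(f"create port group {pg_name} on {vswitch} for {members}")
        print(f"insert inspection microservice at the boundary of {pg_name}")

if __name__ == "__main__":
    flat_segment = [
        {"name": "web-01", "tier": "web",     "vswitch": "vSwitch0"},
        {"name": "web-02", "tier": "web",     "vswitch": "vSwitch0"},
        {"name": "db-01",  "tier": "storage", "vswitch": "vSwitch0"},
    ]
    apply_segmentation(plan_segmentation(flat_segment))
```

The "rule stays in place" behavior would amount to re-running this plan whenever discovery reports a new workload and segmenting anything that does not match an existing port group.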
[0:26:45.6] AA: Yeah. That's great. As more people are using it, you're just getting better and better in terms of how you do that for different applications and different pieces.
[0:26:54.2] RA: Yeah, exactly. We have machine learning capabilities that help us both cluster and classify traffic flows, so we can say, “Ah, this is what Oracle looks like,” and we keep refining that over time, and, “This is what JBoss middleware looks like,” because that's the type of traffic it emanates. If we get to show you a demo, we'd love to show you these machine learning capabilities, which start from basic traffic analysis, look at various other artifacts of workloads, and say, “Yeah, see, here's a three-tier app. Would you like me to go ahead and just recommend?” This is actually really interesting, because now we actually recommend what the access control policies and the threat prevention policies should be, because we now have an understanding of who talks to whom, what methods they use to talk to each other, and what the native attack surface is.
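To illustrate the clustering side of this in the simplest possible terms, here is a hedged toy example using scikit-learn's k-means on a few hand-made flow features. The features, values, and the choice of k-means are assumptions for the sketch - they are not ShieldX's models - but they show how workloads with similar traffic profiles (database-like, middleware-like, web-like) fall into the same group, which can then be labeled and used to recommend policy.

```python
# Toy illustration: cluster workloads by traffic profile (not ShieldX's models).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Assumed features per workload: [dominant destination port, mean bytes per flow,
# number of distinct peers]
flows = np.array([
    [1521, 48_000, 3],    # database-like (Oracle default port)
    [1521, 51_000, 4],
    [8080,  9_000, 40],   # middleware-like
    [8080,  8_500, 37],
    [443,   2_000, 900],  # public-facing web tier
    [443,   2_300, 950],
])

X = StandardScaler().fit_transform(flows)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for features, label in zip(flows, labels):
    print(f"flow profile {features.tolist()} -> cluster {label}")
```

In practice the feature set would be far richer (protocols, methods, timing, workload metadata), and the resulting groups are what the policy recommendations described above would attach to.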
[0:27:36.6] AA: I mean, this is very cool. I’ve not encountered anybody who's doing things the way you're doing them. How do you see the landscape? If someone were thinking about this space and they weren't calling you, are there other people they could call, or what else is out there?
[0:27:57.9] RA: Absolutely. We should share the visual that we have of how we dissect the market. This is all before ShieldX existed. We looked at it, and I'm going to try to put a picture in front of you visually. On the bottom axis, I have something called cloud readiness of the solution - that means how automated and orchestrated it is. On the vertical axis, I'll put something called security richness - that means how rich and complete the security controls are.
If you look at the traditional appliance vendors - and we were one of those in our previous lives - they are very rich in security capabilities, having been doing it for 20-plus years, but they're the anti-cloud. You can’t take the box into the cloud, so there's a big impedance mismatch in operationalizing those in a multi-cloud environment. Then I go further along my x-axis and you find the native controls from AWS and Azure and NSX and ACI, which have basic ACLs - again, primitive on the security capabilities, but somewhat automated because they’re part of the infrastructure.
Then I go further along the x-axis and you find names like Illumio, or vArmour, or CloudPassage. They were born before us and they built something that said, “Hey, I understand things need to be automated and orchestrated.” However, if you look at the entire problem, you're trying to protect everything - the network, the host, the operating system, the application stack, the user, the data that user produces - a rich attack surface. You do solve this today at your on-premise, campus boundaries, where you implement all those controls we talked about: data loss prevention, security gateways and so forth. Now, to do that on a multi-cloud basis, on the east-west axis, scaling from zero to terabytes of inspection - that velocity is the challenge. The new entrants said, “Oh, this is too hard. We'll just do basic ACLs again.” If you look at Illumio, they do basic ACLs using a host agent. vArmour does basic ACLs using a network-based approach, but for ESX only. CloudPassage does basic ACLs and something called log-based threat detection - deferred-time threat detection. This goes on: StackRox, log-based detection; Twistlock, more log-based detection.
They have not done true security; they've just basically given you visibility. We looked at this and said: this is the problem, and we want to solve it in three dimensions. First, the richness of the security capabilities to fully address east-west and lateral movement with full DPI - giving you threat prevention, intrusion detection, malware prevention, data loss prevention, looking for URLs going in and out - a full scale of capabilities. Second, understanding how kill chains progress in a multi-cloud environment, and building algorithms to detect and prevent the kill chain, right? That's the full richness as we go through the controls. Third, running across multiple clouds with full automation and orchestration, which then dramatically reduces the cost of operating this thing. We’ll send you this visual, so you can look at it and say, “Yes, this makes sense.”
[0:30:44.6] AA: Yeah, that would be great. Now, this has been really interesting. Thank you for taking the time. I feel like we hit on a lot of different topics, but in a way that I think even people who are not experts in the space can still grasp. I thank you for that clarity. I don't always get that.
[0:31:01.4] RA: Great.
[0:31:02.2] AA: Just before we sign off, what else should we know?
[0:31:05.4] RA: Yeah. A couple of other interesting artifacts: we are funded not only by traditional VCs - we have world-class VCs in Bain Capital and Aspect Ventures - but we also have an amazing set of strategic investors in the company, namely FireEye, Symantec and Dimension Data. We're unique in that way. You can see FireEye and Symantec being there because they see this emergence of cloud security, multi-cloud security, software security as capabilities that are not there among the traditional players in the industry. You see Dimension Data coming in because they see their customers asking for software-defined everything, right? They want software-defined solutions for software-defined data centers and clouds, and they want to make sure that they have something that can help their customers move towards that vision.
[0:31:49.3] AA: Yeah. No, that sounds great. I think your background, obviously, but also the investors bring the experience and real backing, and support the idea that this is really innovative, which I think is great, but also to line up with [inaudible] sort of willing to throw rocket fuel at stuff.