Solving the Technology Trust Crisis

“Have a strategy for what you do when you are attacked, because you will be attacked.”

Trust is the fundamental building block of human connection, whether in business, life or networked technology, such as the Internet of Things. Yet in today’s digital economy, trust is increasingly threatened. High-profile, headline-grabbing data breaches have made trust a topic of discussion in policy circles, and while it may sound counter-intuitive, organizations must move to a zero-trust model in order to create connectivity consumers can trust.

For our hyperconnected world to reach its full potential, there must be credible trust between technology and people. In this session, I’ll detail how businesses can build that trust and emerge stronger – and safer – as a result, using a framework that consists of three pillars — security, privacy and control.

Continue the conversation with Charles on LinkedIn.

Please see the full transcript of the talk below.

Alistair Croll: [00:00:00] You can tell we’re not quite as formal as some conferences. 

So Charles, when we were talking about this, you made an assertion early on that the world is a dangerous place. And I used the hashtag example to kind of tee that up. Is the world really a dangerous place? 

Charles Eagan: [00:00:21] I think there are many sides to that question. Just read the news every day. There’s all kinds of compromises, cybersecurity breaches, identity theft. It’s not at all rare. We’re basically surrounded by it all the time. We’re almost desensitized to the cyber security risk, the loss of privacy, and there are way more connected devices coming.

So I like to say we’re tripping over the start line. It’s going to get a lot harder. We know a lot and we can do a lot, like I’m very optimistic, but there’s a lot of hygiene, best practices and technology [00:01:00] we need to put in place to make it safer. So I think I’m overall positive, but there are some lessons to be learned and we have some work to do, all of us.

Alistair Croll: [00:01:08] I have a friend who heads up a manufacturing company, and that’s all the details I’m going to provide for reasons that will soon become apparent. He recently paid off a ransomware demand. He had a bunch of branch offices and found himself calling the branch office managers and saying “go into the wiring closet and unplug every cable, I’ll be there in three days.” And this is a company that is in the supply chain of a very large industry. Eventually, I think 10 out of the 30 systems were compromised by the ransomware attack, and he had all the right protections in place.

I think you were mentioning a shipping company?

Charles Eagan: [00:01:42] Yeah, I’m wondering how many people here have been impacted by ransomware? 

Alistair Croll: [00:01:46] Show of hands: anybody here have been involved in a ransomware demand at work or at home? 

Charles Eagan: [00:01:50] Yeah so by numbers, you can hope it happens to someone else.

Alistair Croll: [00:01:56] Or it may have happened but you didn’t know cause someone paid it off, right? 

Charles Eagan: [00:01:58] Exactly. Exactly. Yeah so [00:02:00] there’s a great example of a shipping company. This is fairly public. I believe there were over 50,000 nodes impacted at this shipping company. This shipping company is delivering content globally. I think every 15 minutes a ship will land somewhere, each ship containing over 20,000 containers. And all of a sudden they’re flying blind. They don’t have navigation, they don’t have tracking, and they chose not to pay the ransomware. And they basically went into paper mode. There were rooms full of computers that had to be re-imaged as they tried to recover. And I think one of the lessons in that one is, you need to protect, but you also need to have a plan for what to do, for how you react as well. So if you’re compromised, your emergency crisis plan needs to be in place as well. So that’s just one example of ransomware at a very large scale company.

Alistair Croll: [00:02:56] It seems like the underlying problem here is a trust problem, in that [00:03:00] yeah, you can say “is this use of a computer authorized”, and obviously a hacker is doing something unauthorized. But a lot of that comes down to the system thinking they were authorized, because it mistook their identity for someone else’s.

Charles Eagan: [00:03:12] Yeah, exactly. Exactly. 

Alistair Croll: [00:03:14] So how do we tackle the trust problem? What does it mean to even trust an individual? 

Charles Eagan: [00:03:21] So we have a term, and it’s not only our term: zero trust. You know, “trust, but verify.” I think that really applies in the cyber world. So basically you need to assume that you can’t trust anything. You can’t trust the device. You can’t trust the user, because with deepfakes you can fake audio and you can fake video. You can’t trust the network, because your packets could be blackholed to some adversary.

So basically you need to sort of treat everything as if it could be compromised and build the trust up in that environment. So you need to think of trust as something that’s [00:04:00] transactional. So a username and password, if you have only username and password, that’s not transactional. If someone steals that information or gets that information, they now have unfettered access to your information. 

There was a great example recently, just look at the news for the examples. There was an automobile that was Bluetooth enabled. Great tech, love it. Someone rented a car six months ago, and they’ve been publicly saying “I can still access that car, even though I returned it. I can turn it on, turn it off. I can start the engine and stop the engine. And I don’t want to be able to do that.” And this has been going on for six months. So this is a trust window that was open, that was never closed. 

So you really need to think about trust, and you need to think of it as transactional. And from a permissions point of view, you should only give access to the information that’s needed, [00:05:00] in case that information is breached. So it’s the minimum information required to do the job.
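Charles’s idea of transactional, least-privilege trust can be sketched in a few lines: every grant names the minimum scope it covers and an expiry time, so access like the rental car’s Bluetooth pairing can’t linger for six months. The class and field names here are illustrative assumptions, not any particular product’s API.

```python
from dataclasses import dataclass
import time

@dataclass
class AccessGrant:
    subject: str
    scope: frozenset       # the minimum set of resources this task needs
    expires_at: float      # trust is transactional: every grant has an end

def is_allowed(grant: AccessGrant, resource: str, now: float) -> bool:
    """Allow access only while the grant is live and the resource is in scope."""
    return now < grant.expires_at and resource in grant.scope

# A grant scoped to exactly one document, valid for one hour.
grant = AccessGrant("renter", frozenset({"doc-42"}), expires_at=time.time() + 3600)
print(is_allowed(grant, "doc-42", time.time()))   # in scope, not expired -> True
print(is_allowed(grant, "doc-99", time.time()))   # outside minimal scope -> False
```

The second check is the “minimum information” point: even a valid, unexpired grant says nothing about resources outside its scope.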

Alistair Croll: [00:05:05] You used this term “human proofing” when we were talking, but what do you mean by “human proofing”? Just all humans are dangerous bags of meat, we should avoid them? And at some point we have to say, this person is trustworthy, right? This is security, people, it is. 

Alasdair Allan, ladies and gentlemen, speaking tomorrow afternoon about how all humans are dangerous bags of meat.

Charles Eagan: [00:05:22] Yeah, I was going to open with “humans are the worst”, but people try to do the right thing. You know, hey, you’re working on an important document and you can’t get through your security hoops, so you mail it to your Gmail account so you can work on it on the weekend. You’re trying to do the right thing. You’re trying to do your job. You shouldn’t have to resort to these insecure methods to be able to do your job.

So I think humans tend to, you know, they like to have frictionless security. So if there’s security that makes you jump through hoops, humans are clever. They’ll find a way around it. So [00:06:00] I think you need to make the security easy to use.

Our vision is, we want to get rid of passwords and we want to increase security at the same time. We want to make it easier to use and make it more secure.

So I think humans are one of the weakest links in the security chain. 

Alistair Croll: [00:06:17] I will say we run a few conferences and the FWD50 website gets a considerable number of attacks. Most of them from two countries, specific countries that you can probably guess. 

Charles Eagan: [00:06:27] It’s not Canada. 

Alistair Croll: [00:06:28] It’s not Canada. We do get a lot of traffic from Canada but we don’t get a lot of site lockout warnings. 

But I think as a vector, like it never occurred to me, there’s a lot of people who trust the link when we send it to them. So we’re a good attack vector into government. So there’s this like downstream kind of, you got to think about yourself, not just your own thing, but what could someone do if they could impersonate you or something like that?

You said a current state of security is sort of passwords with maybe some SMS? 

Charles Eagan: [00:06:54] Yes. 

Alistair Croll: [00:06:55] Where do you think we are security wise? I mean, I know some of the big vendors offer some [00:07:00] kind of authentication?

Charles Eagan: [00:07:01] I think right now our security strategy is what I would call static. We have static firewalls, we have username and password, and we might have multifactor authentication. Some people may have password keepers, password managers, so they’re not reusing passwords across different devices, but in general it’s a pretty static and pretty fixed protection.

And I think where we need to go is like, you know you’re you, I’m pretty sure, how do you give that information to the computer so it knows you’re you confidently? 

And so we want to look at things like your behavior. So what do you normally do when you’re on your computer? Do you open Outlook? Do you go to the web, and where do you normally travel? So if we monitor these factors without collecting personally identifiable information, and we build a model of [00:08:00] what you do and when you do it, then if someone has your username and password and they start to do something that isn’t typically something you would do, we would just lock them out, or we would ask for another authentication factor or something. So the idea is we want to be able to protect, not one time. Like, you should not be compromised for more than a millisecond before you’re protected. So this continuous security is something that we think is really important.
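A minimal sketch of the behavioral modelling Charles describes, assuming a simple frequency baseline rather than any vendor’s actual model: the system learns which actions a user normally performs, then scores a session by how much of it is behavior it has never seen from that user.

```python
from collections import Counter

class BehaviorBaseline:
    """Toy continuous-authentication sketch: learn which actions a user
    normally performs, then flag sessions dominated by unseen behavior.
    (Action names and the scoring rule are illustrative, not a real product.)"""

    def __init__(self):
        self.history = Counter()

    def observe(self, action: str):
        self.history[action] += 1

    def anomaly_score(self, session: list) -> float:
        # Fraction of session actions never seen in this user's history.
        if not session:
            return 0.0
        unseen = sum(1 for a in session if self.history[a] == 0)
        return unseen / len(session)

baseline = BehaviorBaseline()
for action in ["open_outlook", "browse_web", "open_outlook", "edit_doc"]:
    baseline.observe(action)

print(baseline.anomaly_score(["open_outlook", "browse_web"]))        # familiar -> 0.0
print(baseline.anomaly_score(["dump_credentials", "scan_network"]))  # unseen -> 1.0
```

A real system would score continuously and trigger a step-up authentication challenge above some threshold, which is the “lock them out or ask for another authentication factor” step.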

Alistair Croll: [00:08:28] And you referred to this as the “zero trust architecture”. 

Charles Eagan: [00:08:31] Yeah, so there’s a number of elements to zero trust. The ability to build trust in your network, this is one of the examples that could be deployed. You know, the biggest thing you can do for security is user training, but that will be a continual uphill challenge.

Alistair Croll: [00:08:46] I do notice I get more and more mails that say, like, “This mail is from an outside source. Do not trust it” within organizations. But are you worried that people just get numb to it, that you stop reading the red bold text at the bottom? You know, then the IT manager adds a blink tag, and pretty soon you just don’t [00:09:00] read that?

Charles Eagan: [00:09:00] Like who really wants to open the metadata on an email to see if it’s coming from an… Like email is not a terribly reliable protocol. I’ve been sending emails from other people to test if it can be done. 

Alistair Croll: [00:09:13] You’re the kind of guy that phishes his friends just for fun?

Charles Eagan: [00:09:17] Never for fun! No, no. 

You know, you can spoof emails. So really this is something a computer is good at: is this a legit email? Is this a legit URL? Or is there something that we should draw to the human’s attention that causes us to be suspicious? You know, those are the kinds of things to make it more human proof. 
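As a sketch of the automated checks Charles is describing (“is this a legit URL?”), here are two classic phishing heuristics a computer can run before a human ever sees the link. The specific rules are illustrative assumptions, not any product’s detection logic.

```python
from urllib.parse import urlparse

def suspicious_url(url: str, link_text: str = "") -> bool:
    """Two simple phishing heuristics a machine can check automatically."""
    host = urlparse(url).hostname or ""
    # Punycode hostnames ("xn--") can hide lookalike Unicode characters.
    if host.startswith("xn--") or ".xn--" in host:
        return True
    # If the visible link text is itself a URL, its host should match the
    # real destination; a mismatch is a classic phishing tell.
    if link_text.startswith("http"):
        shown = urlparse(link_text).hostname or ""
        if shown and shown != host:
            return True
    return False

print(suspicious_url("https://fwd50.com/agenda"))                         # False
print(suspicious_url("https://evil.example/login", "https://fwd50.com"))  # True
```

This is exactly the kind of tedious metadata comparison humans won’t do but computers are good at, which is the “human proofing” point.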

Alistair Croll: [00:09:35] I have a theory about some of this: that once upon a time we had this triple-A stuff, authentication, authorization and accounting, back when bandwidth was really expensive and your ISP was going to charge you for how many minutes you were dialed in at 1200 baud or whatever.

And then there was this sea change moment in 1996. Because up until that time, the family had a PC. Maybe they had an AOL account, but the family had a PC with an email client. And [00:10:00] then in ’96, Hotmail came along and said “you as an individual can have an email account.” And as a result, this personal email became this log file for our online lives. And not only was it then used as a form of password recovery and all these other things, but my email, if you went and looked at it through, like, I don’t know, whenever I got Gmail, 97-98, it’s a log of my entire life. It’s every receipt for everything I’ve bought. It’s every password recovery. It’s got a bunch of stuff on travel and accounting. It is a gold mine that simply didn’t exist before we had personal web-based email.

So it seems to me like 1996, our digital life started to become stealable. And if tech got us here, how’s tech going to fix that problem?

Charles Eagan: [00:10:46] Yeah so I think that’s a great example of a step function of the information that was being collected.

Another big factor that I would add to that, [00:11:00] like the fact that Facebook or Google or Amazon are collecting lots of information about you to try to improve services. Your digital footprint in the cloud is a very accurate representation of your thinking and your activity. And this is something that, it probably knows what you’re going to do more than your brain does because it’s based on actual history versus intent. 

Alistair Croll: [00:11:23] Yeah I saw a study that if you tell me 10 songs you like, I can probably guess a lot of other things about you: race, religion, location. And then you’re in an arms race with an algorithm that’s trying to impersonate you.

Charles Eagan: [00:11:35] Yeah, so to answer your question about technology’s potential: there’s a ton of offensive security activity going on globally, but there’s also lots of great technology on the defensive side. You know, we can make cryptography that is quantum resistant today, even though quantum computers don’t exist yet; we know we can create encryption protocols that [00:12:00] will be quantum resistant, which is fantastic. But I think we can also use large datasets to our advantage.

So if we use a buzzword here, like artificial intelligence and machine learning on large data sets, we can start to build things like this continuous authentication, or contextual authentication, based on what it is you’re actually doing when you’re online, to secure things far more strongly.

There’s some cases now where we have artificial intelligence that can detect malware before there’s a patient zero. Historically, with malware, you needed a patient zero. You found a pattern. You broadcast that pattern across all the malware detection engines. And, you know, it would take weeks for that pattern to get deployed, and the damage is already done. There are 50,000 computers blacked out at the shipping company. But with artificial intelligence, we can actually detect [00:13:00] malware without a patient zero. So we can say this is probably malware, so we should probably take it offline, detonate the malware, and verify whether it’s a false positive or a false negative. So we jokingly say we’re predicting the future: we can detect malware before it ever exists. And we need to do a lot more of that.
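A toy version of signatureless detection, hedged heavily: real engines train models over millions of static attributes per file, while this sketch scores just four made-up features with hand-picked weights. The point is only that the verdict comes from the file’s own properties rather than from matching a known sample, so no patient zero is required.

```python
import math

# Toy static features of a file. Real engines use millions of attributes
# and trained models; these names and weights are purely illustrative.
WEIGHTS = {
    "entropy": 0.9,          # packed/encrypted payloads have high entropy
    "imports_crypto": 1.5,   # calls into crypto APIs (common in ransomware)
    "writes_many_files": 2.0,
    "signed_binary": -2.5,   # a valid signature lowers suspicion
}
BIAS = -1.0

def malware_probability(features: dict) -> float:
    """Logistic score over static features: no signature of a known
    sample is involved, so no patient zero is needed."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

benign = {"entropy": 0.3, "imports_crypto": 0, "writes_many_files": 0, "signed_binary": 1}
suspect = {"entropy": 0.95, "imports_crypto": 1, "writes_many_files": 1, "signed_binary": 0}
print(malware_probability(benign) < 0.5)    # True: low score, likely benign
print(malware_probability(suspect) > 0.5)   # True: quarantine and detonate safely
```

A borderline score is exactly where the “detonate it in a sandbox and verify” step Charles mentions comes in.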

Alistair Croll: [00:13:24] I guess to pursue the medical analogy, when you grow an antibody, the antibody recognizes something inherent in the nature of the pathogen, right? And then you hear things like “antibiotic-resistant staph has learned how to disguise that”.

Is this an arms race where you’re going to have AI, and then someone will make malware, and they’ll just get a copy of the AI and harden it against it? Or is it the case that there are certain intrinsic things in how malware propagates, things no user would ever do, that make it impossible for [00:14:00] malware not to behave that way and still be malware?

Charles Eagan: [00:14:02] Unfortunately, I think this is a little bit of an escalation of the AI engines: the good versus the bad, spy versus spy. So I do believe AI will be used to try to make malware less detectable, and AI will be used to detect malware. So it really applies in both camps. But I think the benefit, let’s say, the “good team” has is, if we look across your portfolio, it could be your car, your phone, your computer, your location, your behavior, and these kinds of things are very hard to spoof. It’s easy to spoof one parameter or two parameters. Like two-factor authentication: you can spoof both factors. But you can’t spoof your activity.

Alistair Croll: [00:14:49] I think the example you used was, you know, you’re logging in from China, but your phone’s in Michigan. 

Charles Eagan: [00:14:54] Yeah, it’s called geoinfeasible. 

Alistair Croll: [00:14:56] Geoinfeasible! That’s a new word. 

Charles Eagan: [00:14:58] Yeah, there [00:15:00] you go. There are so many little things that are indications a human can use. Like, if we’re speaking, I could probably determine if it’s a deepfake, even if you were on a video. There’s lots of these little bits of information we can use for our security and protection.
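The “geoinfeasible” check is one of the easier signals to make concrete: if two sightings of the same user imply faster-than-airliner travel, flag the login. A minimal sketch, assuming a 1,000 km/h speed threshold (the threshold and coordinates are illustrative):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

MAX_SPEED_KMH = 1000.0  # roughly airliner speed; the threshold is an assumption

def geo_infeasible(prev, curr, hours_apart: float) -> bool:
    """Flag a pair of sightings that would require impossibly fast travel."""
    distance = haversine_km(*prev, *curr)
    return distance > MAX_SPEED_KMH * hours_apart

# Phone seen in Michigan, login from Beijing twenty minutes later.
michigan = (42.33, -83.05)
beijing = (39.90, 116.40)
print(geo_infeasible(michigan, beijing, hours_apart=1/3))  # True: flag the login
```

The same pair of sightings a few commuting kilometres apart, an hour later, would pass, which is why this signal is cheap and produces few false alarms.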

Alistair Croll: [00:15:19] Yeah and I’m fascinated. I mean look at where deepfakes have come. There’s an inside joke in machine learning that says AI is by definition everything machine learning can’t do yet. As soon as machine learning can do it, it ceases to be AI and it’s just machine learning. 

Which makes me wonder, like, when you have these attacks, how much openness versus secrecy does there need to be on the part of the detector? Because if I open source my malware detection algorithm, I’ve essentially given the pathogen all the penicillin it needs to try and make penicillin-resistant bacteria, to strain that analogy past its breaking point.

Charles Eagan: [00:15:54] Yeah, so I think some of the magic is in the big data. Like, when we’re doing malware detection on files, for each [00:16:00] file we’re looking at 125 million attributes. And those 125 million attributes per file are being built into a model. Large data is very hard to fake out. So the ML models that get built on that, which achieve 99.99% detection accuracy, are very hard to fake out.

So I think the other thing is, attacks usually go after the weakest link. So why try to plant spyware on a computer when you can just ask someone for their password? You know, the weakest link is probably our homes. So it’s easy to attack a home. People plug in IoT devices, and most people that work in companies also live in homes, so they go home and they access their critical information. If you don’t have protection for that, then the attack vector is kind [00:17:00] of, let’s go for the easy ones first. So it comes back to educating people.

Alistair Croll: [00:17:04] Does the government play a role in that education? How much does the government need to do to protect its sort of democracy and so on, versus letting the private citizens do their own thing?

Charles Eagan: [00:17:15] Well, I think regulation is a huge part of the future strategy for success. So I think there’s a big part there. The government’s also handling lots of our extremely sensitive information, so you want to make sure that they’re demonstrably using best practices.

A lot of the IoT devices that are coming online don’t have any security built into them. And these IoT devices are, you know, what you’d call shadow IT or rogue IT. We have to make sure that people are aware that as they’re bringing things online, they need to know the pedigree and the security of those devices. And so the government can set standards, and the Canadian Centre for Cyber Security has published some security awareness campaigns.

Alistair Croll: [00:17:59] I’m still [00:18:00] waiting. Remember those, you know, I’m just a Bill on Capitol Hill, like Schoolhouse Rock? I’m still waiting for governments to do the Schoolhouse Rock equivalent of cyber security or something. Because I think that data exists, but it’s not front and center for most people. 

Charles Eagan: [00:18:13] Yeah. 

Alistair Croll: [00:18:13] And it’s funny, I’ve talked to people, young Instagram users who are much more aware of password security, because they know that that’s their life. And if someone steals it, they can do bad things, right? I don’t think we view ourselves as targets as much in the sort of commercial world. 

One quick last question: if you were talking to the head of IT for an organization today, and you said like, there’s three things I want you to do tomorrow, what would they be? 

Charles Eagan: [00:18:37] Yeah so probably, there’s a lot of cyber security knowledge we have today that we’re not applying. So I would say, take a hard look at your network to try to hack yourself. Like, where are your vulnerabilities?

I would say read the news, and see if someone else has been compromised. Like, automobiles were recently hacked and shown to be vulnerable IoT devices. [00:19:00] The reason they were hacked was because someone tried. So there’s a number of attack vectors that just haven’t been attempted yet.

So I would say hack yourself, look at your network, look at what’s happening in society and ask yourself, are you going to be vulnerable for this thing that you were fortunate to not have had happen to you? 

And probably the third thing would be: have a strategy for what you do when you are attacked, because you will be attacked. In fact, almost everyone in here has already been compromised with personal information. So I would say, what are you going to do to detect it, shut it down and prevent it?

So those would be my three things. 

Alistair Croll: [00:19:41] Awesome. Good words to live by. Thank you so much. 

Charles Eagan: [00:19:44] Thank you very much. Thank you everyone.

Alistair Croll: [00:19:46] Thanks Charles.