
Cybersecurity & AI with Plurilock founder Ian L. Paterson

Mike Krass • Thursday, February 29, 2024 • 19 minutes to listen

Subscribe to the Podcast or listen on...

Spotify | Anchor


Transcript

Opening

Hello, everybody, and welcome to What's the Problem, the podcast where we dive deep into today's hot-button cyber and data security topics. During each episode, we're joined by expert guests who share insights, takeaways, and experiences from the world of cyber and data security. The podcast provides a lot of valuable information and strategies to get your organization moving up and to the right. So join us as we explore the evolving landscape of security. This is What's the Problem. I'm your host, Mike Krass. Let's get started. Today we are joined by Ian L. Paterson. Ian, say hello to the viewers.

Conversation

Ian L. Paterson: Hello, Mike, great to be here.

Mike Krass: And Ian, first question for you: why are you qualified to talk about security? Tell our viewers a little bit about your background.

Ian L. Paterson: Well, I actually sit at the intersection between cybersecurity and AI, and we didn't get much credit for that up until this year. Now it seems to be really important. I've spent the last 15 years solving business problems using data. For the last chunk of time, I've been running a company called Plurilock. We're actually publicly traded, listed on the TSX Venture; the ticker is PLUR. Last year we did approximately $64 million in revenue, and that's up from just under half a million dollars two years prior, so we've seen a tremendous amount of growth. Most of the clients we deal with are on the larger side: mid-market, enterprise, and government agencies. The US federal government is definitely our largest customer by far, which is notable because we're actually a Canadian company, headquartered in Canada. But we're seeing a lot in the way of conversation to do with AI, both positive and negative, bad guys, etc. So I'm excited to talk to you about it.

Mike Krass: Absolutely. And that's a perfect segue. I'm hearing these stories, these murmurs right now, that AI has been a little bit of a headache when it comes to cybersecurity. What are you hearing? What do you see in the market right now?

Ian L. Paterson: It's a great question. I think if we rewind maybe six months, we saw a sudden boom of interest and attention on ChatGPT specifically. I've actually seen a chart which looks at the growth rate: how long does it take a product or a company to get from zero to a million users, and then from zero to 100 million users? It showed Instagram and Facebook and Snapchat, and these are all kind of normal-looking curves. And then ChatGPT is just straight up. Like, it's not even up and to the right, it's just vertical. Just monstrous growth. But then what happened is that the interest seemed to die down a little bit, and what happened in parallel is that there's been this Cambrian explosion of generative AI tools. So what we're seeing in practice, across pretty much every industry, every company size, and every vertical, is that there's still a ton of interest in generative AI. The actual usage or adoption rate of ChatGPT, I don't know if it's held steady or if it's gone down a little bit. I did see a report, I think last week or the week before, indicating that signups had been slowing or had potentially gone down month over month, so I'm not sure exactly there. But I think people are starting to realize that the technology itself is more useful if it's consumed some other way, not just directly through a chatbot, but integrated into some other system. That's what we're seeing a lot of from our customers.

Mike Krass: Tell me about those integrations. I hear "integration" and I think of security. So how do these integrations play a role in the world of security now that they're opening up? Because, as all the viewers are probably aware, like you just said, when ChatGPT started, you went to OpenAI's website, you accessed it through a browser, and you stayed in that browser. And just the other day, I logged into one of our sales tech platforms, and they've got a generative AI model sitting there to help us write copy and subject lines and this, that, and the other thing, based on a large language model. So it seems like the integrations have now become potentially a security vulnerability. Maybe, maybe not. What do you see?

Ian L. Paterson: Well, I think there are probably two different ways we can tackle this problem. Generally speaking, I'm seeing the same thing as you: a lot of integrated tools that have generative AI capabilities. A good example would be something like Grammarly. The Grammarly browser extension is like a super-duper spellcheck (probably not their technical trademark, but super-duper spellcheck), and they recently added an integration with OpenAI, which I think is called GrammarlyGO, or something like that. So now it allows users to get the benefit of OpenAI inside the Grammarly tool. But you asked a question about security. As a security practitioner, I'm looking at this and saying, okay, great. I may or may not have trusted ChatGPT, and we can get into some reasons why you shouldn't. But now I have to look not only at OpenAI and ChatGPT, but also at these other applications that may have it under the hood. Those are harder to identify, and they're also harder to apply any controls to. So as a CISO, as a security practitioner, how do I make sure that users are not accidentally disclosing data of a confidential nature into a generative AI tool where the terms of service grant the tool a license to train on it? Those are the questions we're asking right now, and there aren't great solutions to those problems.
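To make that point about embedded integrations being harder to identify concrete, here is a minimal sketch of how a security team might surface applications that carry generative AI under the hood by cross-referencing a software inventory against a simple watch list. It is purely illustrative: the watch-list entries, inventory format, and function names are hypothetical assumptions, not Plurilock tooling or any vendor's actual API.

# Illustrative sketch only: surface "shadow AI" by checking a software inventory
# against a hypothetical watch list of tools known to embed generative AI.
# Nothing here is Plurilock tooling or a real vendor API.

TOOLS_WITH_EMBEDDED_GENAI = {
    "grammarly": "GrammarlyGO (OpenAI integration)",
    "notion": "Notion AI assistant",
    "salesloft": "generative copy / subject-line assistant",
}

def flag_embedded_genai(installed_apps):
    """Return inventory entries that likely ship a generative AI integration."""
    findings = []
    for app in installed_apps:
        key = app["name"].strip().lower()
        if key in TOOLS_WITH_EMBEDDED_GENAI:
            findings.append((app["name"], app["user"], TOOLS_WITH_EMBEDDED_GENAI[key]))
    return findings

if __name__ == "__main__":
    # Hypothetical inventory, e.g. exported from an MDM or browser-extension report.
    inventory = [
        {"name": "Grammarly", "user": "alice"},
        {"name": "7-Zip", "user": "bob"},
    ]
    for name, user, detail in flag_embedded_genai(inventory):
        print(f"Review: {user} runs {name} -> {detail}")

A real program would pull the inventory from endpoint or browser-extension telemetry and keep the watch list current, but the point stands: embedded integrations only become governable once they are visible.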

Mike Krass: And on that point about training: if I recall, in the past 30 to 45 days or so (we're recording this in summer 2023), the founder of ChatGPT, or I should say OpenAI (he probably gets that a lot), was just brought in front of a Senate subcommittee here in Washington, DC, and asked about training, right? That was the thing being talked about a lot: what is being used as training data and what is not. So in hearing what you're saying, the integration pipeline is exciting, because it means you can use some of these generative AI tools and models inside applications you're already using, which makes it easier to flip between the two. Simple. But it also limits controls. So it seems like understanding the controls on what it can and cannot train on falls into those terms of service that a lot of people just accept and walk away from. Would you agree with a statement like that?

Ian L. Paterson: I think that's definitely the case. We separate it into a couple of different categories. The first category of usage is governed by whatever your internal employee handbook says. Do you have a policy inside the company which says you are or are not allowed to use ChatGPT and other forms of generative AI? Now, I've spent the last couple of weeks doing customer interviews, and for the most part, the majority of organizations have policies in place which govern the use of large language models, including ChatGPT. What most of those policies say is either, straight up, you're not allowed to use them, or you're allowed to use them, but make sure you don't put any confidential data in there. Confidential data could be customer data, PII, personal health information, Social Security numbers; if you're in the finance sector, source code, which can be the alpha code you use to generate trading models. Obviously, if you're in the US and you're in healthcare, HIPAA is a major concern. As far as I know, ChatGPT is not HIPAA compliant, so you really don't want to be inputting HIPAA-controlled data into ChatGPT; otherwise, that's a violation. Those AI governance policies are saying, try not to put in confidential data. As a policy, that makes sense. The challenge is, if you're an end user, even if you don't do it maliciously, how do you make sure you don't do it accidentally? I'll give you a perfect example. I like to think of myself as a power user, which means I use a lot of Alt-Tab and Ctrl-C and Ctrl-V. And I do it so quickly that my behavior is automatic: I'll Ctrl-C to copy something, then Ctrl-V and hit Enter, and the Ctrl-V and hit Enter is the thing I do without thinking. I think you know where this is going. I don't always have on the clipboard the thing I think I do, and so I end up pasting something I shouldn't into Slack, or pasting something I shouldn't into ChatGPT. If you do that and you hit Enter, and you're on the consumer version of ChatGPT, well, that data is out there. You've now disclosed something you shouldn't have, and there aren't good ways of pulling it back. So it is a big problem, and it's a big conversation we're having. It's topical, too. We shipped something last week, an early access program capability called PromptGuard, which is specifically designed to provide some guardrails around AI usage for businesses that have confidential data. The reality, though, Mike, and what we're seeing in practice, is that basically every company has confidential data of one sort or another, even if the only kind you have is HR data. If you have customers, or if you have employees, you probably have data you need to protect, and a regulatory obligation to protect it. And businesses don't see a good way of doing that. That's why we've entered the market.
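As a concrete illustration of the kind of guardrail Ian is describing, here is a minimal sketch in Python of screening a prompt for obviously confidential patterns before it is ever submitted. It is not PromptGuard or any Plurilock code; the patterns, function names, and sample data are assumptions chosen for illustration, and a real control would need to cover far more data types.

# Illustrative sketch only: a bare-bones pre-submission check for prompts headed
# to a generative AI tool. Not PromptGuard; the patterns below are assumptions.
import re

PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(text):
    """Return (label, match) pairs for anything that looks confidential."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits

def guarded_submit(text, send):
    """Call send(text) only if the prompt passes the screen; otherwise block it."""
    hits = check_prompt(text)
    if hits:
        print("Blocked before submission:")
        for label, match in hits:
            print(f"  {label}: {match}")
        return None
    return send(text)

if __name__ == "__main__":
    # Simulates the accidental Ctrl-V Ian describes: the clipboard held an SSN.
    guarded_submit("Summarize this note: SSN 123-45-6789",
                   send=lambda t: print("sent:", t))

In practice a guardrail like this has to sit closer to the browser or clipboard than to the chatbot, and pattern matching alone misses plenty (source code, customer lists, health records), which is why Ian frames it as a problem without great solutions yet.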

Mike Krass: You mentioned the two policies at the beginning: just don't use these, or use them but be careful, is kind of how I would summarize them. I've got some friends who work for the US federal government, and the US federal government is on the first policy: if you're on a US federal government device, a tablet, computer, phone, whatever, do not use it, period, end of story. You're in Canada: is it the same in Canada? Or what kind of controls are they talking about up there?

Ian L. Paterson: Yeah, sure. But we saw this with the iPhone, right? If you think about what happened when the iPhone and smartphones came out, what did businesses do? Businesses said, oh my goodness, this is a security nightmare; you are not allowed to use your smartphone for work. And what happened in practice? Everybody did. It was too convenient. It was too useful. And as a result, corporate data ended up on smartphones. Now, what did the industry have to do? Well, the industry very quickly had to come up with mobile device management software and, you know, encrypt-by-default policy settings, but that was in response to the fact that just telling people not to do it didn't actually work. I see AI as the exact same thing on a number of fronts, by the way. I think the parallels are a bit uncanny. The adoption rate is so large, just like smartphones, and it's creating an entirely new category of software. This idea that it's not just OpenAI, but OpenAI integrated with something else, kind of sounds like the iPhone and the App Store: hey, this is an enabling technology that's now going to create other value-add as people build on top of it. And you've also got the security nightmare of, hey, there's a new area where regulated corporate data is going to show up; it's just the reality. So as a business owner, or as a CISO, how are you going to, first, be aware that that's happening? And then second, how do you enforce whatever policies you feel you want to or need to put in place?

Mike Krass: I'm going to repeat that back to you. As a CISO, you're really talking about awareness and then enforcement, right? It's a linear thing: I can't enforce what I'm not aware of. It's the same thing you talked about, where back in 2005, 2006, 2007, the US federal government said, no, employees, please don't use smartphones, and then the US federal government became the biggest consumer, especially in Washington, DC, of BlackBerry devices, which were some of the earliest smartphones. So that's an interesting thing I just wanted to repeat back: you've got to be aware of it, and you've got to be able to enforce it. Now, the last question I want to dive into with the viewers today is about the markets. It's 2023, and it's been a bit of a topsy-turvy market, not just here in the States but in other places as well. What are you seeing in the world of cybersecurity in terms of the capital markets? Is it as tough as everyone says it is out there to raise capital, either in the private or public markets? What kind of inside baseball can you share with us without getting anybody in trouble?

Ian L. Paterson: Well, I think we've seen a lot of change. "Choppy" would be my overall comment. If you think back two years, or even to the first part of 2022, and certainly 2021 and 2020, we saw markets going to all-time highs, and we saw a whole bunch of cybersecurity companies go public, either via traditional IPO or through SPACs. Some of them ran up to multibillion-dollar, or at least billion-plus, market caps. Now what we're seeing, just as an overall general statement, is the opposite: companies are getting taken private at a pretty fast rate. Thoma Bravo in particular has just been taking companies private left, right, and center. Ping Identity, I think, was a Thoma Bravo deal. Magnet Forensics, up in Canada, got taken private for over a billion dollars by Thoma Bravo. Absolute Software, another Canadian publicly traded company, was also taken private, and Absolute Software had been on the TSX exchange for what seems like forever. So there's definitely a shift from going public to instead going private; that's the first one. The second one: certainly access to capital has decreased. I serve on the advisory council to the TSX Venture Exchange, which is where we're listed, so we get monthly volume metrics, and certainly the capital raised, and these are public numbers, is less than it has been in the past, just as an absolute aggregate number. Deals are still getting done in terms of financings, but I'm also seeing a lot more M&A. We're acquisition oriented ourselves; we've bought four companies, and as a result we now have a reputation for buying companies, so we have a lot of companies coming to us looking for solutions or looking for exits. I would say we're seeing a lot more venture-backed software companies in the cybersecurity space who either thought they could raise a Series A or Series B and don't have the metrics, or thought they would be cash-flow positive by now and aren't. So we're seeing a lot more of that. You know, I don't think building a business is ever easy, so to say that now is hard: sure, of course now is hard. But it was hard before; it was just hard for different reasons. That also means there's opportunity; it just depends on what side of the table you're on. I've certainly had terms proposed to me over the course of my life that could be described as horrific, in terms of the deals being offered, like, sure, I'll invest in you on these terms, and they would be considered horrific now. But if you're on the other side of the table, that's a great deal, right? So it depends on your perspective. Is it hard right now? Yes. But I also think that it being hard means there's opportunity; it just depends on whether you're a buyer or a seller. For us as a buyer, I think there could be great opportunities. As a seller, you know, our market cap is a fraction of our revenue. We did $64 million in revenue last year, and our market cap is significantly less than that, which has me scratching my head a little about what the disconnect is. So it just depends on what side of the table you're on, I think.

Mike Krass: Yeah. Well, thank you, Ian. And to our viewers and listeners, that is a wrap on this episode of What's the Problem. I hope you found our conversation with Ian L. Paterson insightful and informative. I also wanted to give a quick shout-out to our host, MKG Marketing. MKG is focused on helping cybersecurity businesses get found, get leads, and close deals, so if your cybersecurity business is struggling to do any or all of those things, let us help you. To learn more, you can visit our website at mkgmarketinginc.com. Thanks for listening, and don't forget to subscribe and leave a rating. Ian told me he only likes six-star ratings, so we're not going to let Ian down now, right? Get in those six-star ratings. Until next time, my friends. Ian, say goodbye to our viewers and listeners.

Ian L. Paterson: Mike, I really appreciate the opportunity. If folks are looking to connect, I'm at Ian@Plurilock.com. We're more than happy to share the results of the surveys we've done around generative AI usage. We've also got a couple of free resources, including AI governance templates and examples. So if you're thinking about, hey, how do we put the right policy in place around AI usage, we've got some free resources there, and we're happy to help support the community however we can. Please don't hesitate to reach out. Thanks, Mike.

Mike Krass: That's awesome. And like Ian said, email him. We're also going to put those resources in the show notes, so you can talk to him directly or access them on Plurilock's website. Thanks!

Ian L. Paterson

Ian L. Paterson is a data entrepreneur with more than 15 years of experience leading and commercializing technology companies focused on data analytics and machine learning. Ian has raised over $20M in private and public financing, completed 4 international mergers and acquisitions across North America and Asia, and is co-inventor of 3 patents. As CEO, Ian built and grew Plurilock, leading to its successful listing on the TSX Venture Exchange.

Join our weekly newsletter

Get industry news, articles, and tips-and-tricks straight from our experts.

We care about the protection of your data. Read our Privacy Policy.