
Cybersecurity in 2026 – Threat Landscape, Vulnerability Management and Securing AI

If you need help with our different Risk Managed Services solutions like Cybersecurity, AI Consulting, and more, visit our page here.


Brian Jackson: Good morning, and welcome to those who have already logged on. This is your 5-minute warning, so we’ll be getting the webinar started in about 5 minutes; looking forward to it this morning. Thank you and good morning, everyone. My name is Brian Jackson. I’m the CEO of Abacus Technologies, and I’m coming to you this morning from our Trussville, Alabama office. Before we dive in, I just want to share a little bit about who we are. Abacus Technologies is a trusted partner for proactive technology solutions, including business intelligence, managed services, information security, information assurance, and cybersecurity, that deliver results today and can scale with your business for the future.

We’ve got more than 25 years of industry expertise and have assisted numerous businesses in navigating their ever-changing technology landscape, and by doing so, we have enabled companies to overcome today’s challenges while also guiding them toward a more promising and accessible tomorrow.

BMSS was established in 1991 and has grown to become one of the top 100 accounting and advisory firms in the U.S. We have over 300 employees across our family of companies, and we assist clients in a variety of industries and services, providing accounting, advisory, technology, payroll, PEO, and wealth solutions in an effort to bring our clients peace of mind and provide exceptional client service.

Additionally, we are an independent member of the BDO Alliance, one of the nation’s largest associations of accounting and consulting firms, and through this alliance, we’re able to combine the personalized service of a local firm with the resources and reach of a nationwide network, ensuring our clients receive the very best support available.

If you’d like to learn more about who we are and how we can help, please visit us at www.bmss.com.

Before we get started in our webinar this morning, a couple of housekeeping items to mention. If you have any questions during the webinar, feel free to use the Q&A button located in the Zoom control panel of your screen. We will answer all questions at the end of the presentation. There will also be polling questions, so if you’d like to receive CPE credit for attending this webinar, please answer those as they pop up.

We’re very fortunate this morning to have Lauren Pankey and Jonathan Perz with us to talk about cybersecurity and what we can expect in 2026. Before we get started, let me tell you a little bit about our speakers this morning.

Lauren Pankey is an experienced IT auditor and cybersecurity professional with over 6 years of hands-on experience leading SOC 1 and SOC 2 reporting engagements across a variety of industries, including fintech, healthcare, insurance, manufacturing, B2B services, and cryptocurrency. Her work focuses on helping organizations strengthen their control environments and meet complex compliance requirements while aligning with best practices in security and risk management. In her current role, Lauren also plays a key part in guiding organizations through the evolving landscape of cybersecurity compliance and emerging technology governance, in particular as it relates to artificial intelligence.

Lauren holds a Bachelor of Science in Business Administration with a concentration in Information Systems Management from Auburn University, and a Master of Science in Information Systems Management from the University of Alabama at Birmingham. Lauren is also a Certified Information Systems Auditor and holds a Certificate of Cloud Security Knowledge.

Joining her today will be Jonathan Perz. Jonathan is our Manager of Information Security for Abacus Technologies. In his role, he oversees our security team, engineers security solutions for our clients, and analyzes and remediates security threats. He also spearheads our security product development and implementation.

Jonathan’s career and experience with computing began in the early ’80s, later focusing on personal IT, small business IT, and web development. He spent 6 years in the United States Air Force, where he worked on database development projects in addition to his regular duties maintaining the B-1B bomber.

Jonathan holds a Bachelor of Arts degree in Computer Science with a minor in Business Administration and a Master of Science degree in Cybersecurity from the University of Alabama at Birmingham. He has returned to his alma mater as a credentialed instructor, where he teaches penetration testing and vulnerability assessments.

As a successful small business owner, he uses those skills and that experience to help develop and enhance Abacus Technologies’ rapidly growing security practice. Outside of work, Jonathan is passionate about his ministry, serving as a preacher at a local church in Trussville and as a chaplain for the city of Trussville. He also enjoys martial arts (he holds two black belts), softball, and pickleball, and he lives in Moody, Alabama. So welcome this morning, Jonathan.

With introductions complete, let’s dive right in, and I’ll turn it over to Lauren to get us started this morning.

Alright, I’m gonna turn it over to Jonathan to get started this morning, my bad.

Jonathan Perz: Well, today’s agenda: I think we’re going to start with me, focusing on cybersecurity and the 2026 threat landscape. Then we’ll shift things to Lauren, who’s going to focus on securing AI today, and we’ll wrap up with Brian taking us through a good discussion and question-and-answer session. Next slide, please.

So let’s talk about 2026 and what cybersecurity looks like this year.

Cybersecurity is not about if you are targeted. If you took a look at your sign-in logs, you might get a little bit afraid: you are being targeted.

The question is how fast you’re going to be able to detect and respond if somebody is successful in one of those attacks. And that’s what 2026 is all about. It’s about being able to detect and respond. There are some big macro shifts that have happened over the past year and a half or so.

I would argue identity. And what I mean by identity is your… how you log in, how you’re identified. That’s the perimeter now. It’s not necessarily your firewall. It’s getting into your accounts. Email is a big threat vector.

And logging into your Microsoft 365 accounts, your Google Workspace accounts: hackers are doing that with great success and great impact. And so we’ll talk a little bit about that as we go through. Speed and skill of exploitation are compressing. AI is a big factor in that, but there are other factors, and we’ll talk about that a little bit.

Hackers are getting better at what they do, and thus it’s making it difficult.

And, you know, just the reality that most breaches begin with a compromised identity.

In other words, they’ve got your password, they’re able to get around your MFA, and they’re in your systems, your cloud systems in particular. And it’s not so much about a zero-day vulnerability, though that’s a big risk. The zero-day vulnerabilities are typically leading to ransomware. The compromised identities are typically leading to business email compromises and ACH fraud, so that’s what we want to talk about this morning. Next slide, please.

So what are we seeing? Where’s the damage actually happening in environments today? Well, what we’re seeing, as far as attacks go, and I’m going to explain these, is MFA fatigue attacks. That is, hackers know if they can get your MFA…

Now, let me back up a second. When computers communicate, they exchange what’s called a token, and that token is valuable, because it says to the other computer: who am I talking to? In an MFA fatigue attack, what a hacker is trying to do is get your MFA approval so they can capture a session token. If they can get that session token, they can bypass a lot of security and continue to enter your account at will until that session token either times out or is refreshed. So what they do is send a lot of MFA prompts and alerts, on your phone, or on your computer, or by text message, until eventually you approve one. That’s how they bypass MFA, and that’s how the next bullet there, token theft and session hijacking, takes place. What a lot of people don’t know, for example, is that in Microsoft 365, the default session token lifespan is 90 days. So if a hacker can get a brand-new session token, they can sit in that account for 90 days.
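To illustrate the defense conceptually: shortening the maximum session age forces reauthentication long before a stolen token would otherwise expire. Here is a minimal Python sketch, with an assumed 12-hour limit for illustration (this is not Microsoft’s actual token logic; in Microsoft 365 the equivalent knob is the Conditional Access sign-in frequency control):

```python
from datetime import datetime, timedelta, timezone

# Assumed policy value for illustration; far shorter than the 90-day default.
MAX_SESSION_AGE = timedelta(hours=12)

def session_expired(issued_at: datetime) -> bool:
    """Return True if a session token issued at `issued_at` should be
    treated as stale and the user forced to reauthenticate."""
    return datetime.now(timezone.utc) - issued_at > MAX_SESSION_AGE

# A token stolen three days ago is useless under this policy, but would
# still be valid under a 90-day default.
stolen = datetime.now(timezone.utc) - timedelta(days=3)
print(session_expired(stolen))  # True
```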

And the more patient hackers are going to have the greatest success. And so we’ve got to decrease that threat vector, but that’s what we’re seeing. We’re seeing hackers sit in there for a long time. We’ll talk about how to protect against these in just a little bit here. Credential harvesting is something else we’re seeing via advanced phishing kits.

The best way I can explain this is: if you ever click on a link in an email and that link takes you to a login page, or you open an attachment and it takes you to a login page, that login page might not be an actual login page. It might be an intercept.

An intercept is where a hacker puts up a login page that looks identical to, let’s say, a Microsoft 365 page. What they do there is, you enter your credentials, they send a copy to themselves, they grab your session token, and then they forward you along to Microsoft 365’s real login page. And so, to your eye, it looked like you logged into Microsoft 365, but what the hacker did was intercept it. And that’s all through one altered link.

And so, if you ever click on a link, you don’t want to directly enter your credentials after clicking on a link or opening an attachment in Microsoft 365. You want to go to the actual website where you’re supposed to be logging in, and then do it properly. This way, you know you’re not being intercepted.
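That habit can even be automated. Here’s a minimal Python sketch that checks whether a link actually points at a known login host before credentials are typed in (the allow-list entries are examples, not a complete list of legitimate login hosts):

```python
from urllib.parse import urlparse

# Hypothetical allow-list of hosts where entering credentials is legitimate.
TRUSTED_LOGIN_HOSTS = {"login.microsoftonline.com", "accounts.google.com"}

def is_trusted_login_url(url: str) -> bool:
    """Check that a link points at a known login host before credentials
    are entered. Anything else gets treated as suspect."""
    host = urlparse(url).hostname or ""
    return host.lower() in TRUSTED_LOGIN_HOSTS

print(is_trusted_login_url("https://login.microsoftonline.com/common"))  # True
print(is_trusted_login_url("https://login.rnicrosoft-365.com/auth"))     # False
```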

Another thing you see is compromised accounts through malicious app permissions.

We are seeing hackers stay in systems by installing apps. This is establishing persistence. They’ll install an app that they can log in through later in your Microsoft 365 environment, because a lot of companies haven’t configured Microsoft 365 to prevent that from happening, so users can install apps at random and give themselves permissions to log back in.

And these are areas in which it’s important to understand that Microsoft 365, out of the box, brand new, is not configured to defend against these attacks. The settings are there, but they have to be properly configured.

Ransomware remains active, and it’s adaptive. Yes, you can still get ransomware from an email, but that’s not your biggest threat vector. Threat actors are finding vulnerabilities in your firewalls and in equipment that hasn’t been updated and patched, and once they can exploit that and get into your network behind the firewall, they are deploying ransomware with great impact.

And still, we’re seeing companies that don’t have good backups, that aren’t prepared for that kind of attack, that don’t have any kind of mitigation in place for it. That’s something we’re going to talk about in just a minute, but that’s what we’re seeing right now. And we see these every day, every week.

And some are small impact, and some are big impact. Next slide, please.

And so, as we talk about this, let’s talk about business email compromise for a second, and the business impact of identity compromise. I’ve seen a million dollars walk out the door because of a fraudulent email that somebody didn’t catch.

A lot of times it’s related to what we call a typo-squatted domain. For example, with BMSS, we have seen hackers try to buy the look-alike domain BRNSS. Those letters, r and n, in lowercase in an email address, look quite like an m.
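To make that concrete, here is a minimal Python sketch of a homoglyph check; the character pairs and domains are illustrative assumptions, not a complete detection tool:

```python
# Normalize character pairs that render almost identically ("rn" vs "m")
# and compare against the real domain.
HOMOGLYPHS = {"rn": "m", "vv": "w", "cl": "d"}

def normalize(domain: str) -> str:
    d = domain.lower()
    for fake, real in HOMOGLYPHS.items():
        d = d.replace(fake, real)
    return d

def looks_like_spoof(sender_domain: str, real_domain: str) -> bool:
    """True if the sender domain is not the real domain but normalizes
    to it, i.e. 'brnss.com' masquerading as 'bmss.com'."""
    return (sender_domain.lower() != real_domain.lower()
            and normalize(sender_domain) == normalize(real_domain))

print(looks_like_spoof("brnss.com", "bmss.com"))  # True
```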

And so, it’s hard to identify. You really have to look closely at the email addresses of the people sending you emails to make sure they’re actually who they say they are. But business email compromise is becoming a big deal, because what hackers will do is get into an account, sit there, and watch, waiting for the perfect opportunity to get in the middle of a transaction. Then they’ll start communicating as if they are one side, or both sides, of a conversation to get an ACH or routing number changed, and if they can do that, they’re going to intercept payments, and they’ll do it as long as they can.

And it all depends upon how big a payment they can get in on. If they can get an accountant’s email account, or a CFO’s email account, the paydays can be big, and we have seen that happen.

Vendor payment redirection is something else: payments being sent to vendors. Construction companies suffer from this quite a bit, where a payment going to a vendor is intercepted by one means or another, because we don’t have good internal controls in place, or we’re lacking proper security measures, or people aren’t following policies. In other words, policies aren’t enforced. And that becomes important.

But hackers are using this and exploiting this with great impact. Malware distribution to clients? We don’t see it as much on a day-to-day basis, but it still happens. If they can get malware onto your system, that enables them to either log keystrokes or track various things, that becomes important as well.

And then the other impact that you see here, the one you can’t put a dollar sign on, is reputational damage.

You know, if all of a sudden an account in your company has been compromised, and it starts sending out emails to other people with malicious software on them, people are not gonna take your email. They’re gonna block you.

And that becomes reputational damage, so it’s very important that everybody is vigilant about security when it comes to their email accounts. Reputational damage is that unseen damage that can take place, especially if it involves big transactions.

Regulatory exposure is something else people aren’t thinking about. Just about every regulatory requirement out there now is focused on cybersecurity. We’ve just gone through a series of engagements helping credit unions, because the NCUA has been updating its requirements.

HIPAA is looking at updating its requirements. There are proposals on the table now, and they’re talking about pursuing those further in May. And so…

There’s lots of things happening. You need to be aware of your regulatory exposure. In addition, if an email account is compromised, regulatory-wise, every state now has different reporting requirements.

You’ve got to be familiar with the fact that if an email account is compromised, and there is sensitive information in those emails, in that inbox, or exposed in files, that could trigger legal reporting requirements. And so, there’s a lot to think about there. But a compromised Microsoft 365 account is not just an email problem. It’s a business continuity problem.

It’s not as simple as, oh, we just kicked the hacker out and it’s over. There’s a lot more to it now.

Next slide, please.

So let’s pause for a second and ask some big-picture questions.

Do I know who’s logging into my environment in real time?

If you’re a business leader, and you ask your IT team, could they tell you who is logging into your environment at any given time? Could they? It’s an important question.

How long can an attacker stay in an account without reauthorizing?

That’s another important question that I challenge you to think about.

And here’s another big one: if access occurred at 2 a.m. on a Friday, or, worse yet, on a holiday weekend. Hackers love holiday weekends, because they need a little bit more time to play, and they know nobody’s looking at those computer systems; everybody’s out celebrating. If it’s a holiday weekend, that adds to the time frame.

Would you know before the weekend was over?

And would our first alert come from a client?

That’s not who you want to hear you’ve been hacked from.
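On that first question, here is a minimal sketch, assuming a Microsoft 365 tenant and an already-acquired Microsoft Graph access token with the AuditLog.Read.All permission (token acquisition via MSAL is elided), that pulls recent Entra ID sign-ins and flags successful logins from unexpected countries:

```python
import requests

TOKEN = "<access-token>"          # placeholder for a real bearer token
EXPECTED_COUNTRIES = {"US"}       # assumption: where your users normally sign in

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=50",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for s in resp.json().get("value", []):
    country = (s.get("location") or {}).get("countryOrRegion")
    success = (s.get("status") or {}).get("errorCode") == 0
    if success and country not in EXPECTED_COUNTRIES:
        # A *successful* sign-in from an unexpected country is exactly the
        # 2 a.m. holiday-weekend case worth paging someone about.
        print(s["createdDateTime"], s["userPrincipalName"],
              s.get("ipAddress"), country)
```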

And so, things to think about there; it challenges you to give pause as to what your cybersecurity environment looks like. And let me just say this: if you’re depending upon IT to manage your cybersecurity, they’re doing the best they can, but IT’s focus is to keep the lights on, keep things running, keep things working, keep that network functioning, and that’s a lot of work. An immense amount of work.

You have to have somebody dedicated to keeping an eye on your security and thinking about the security concerns.

Next slide, please.

So, AI is a major cybersecurity concern. It kind of swept the world before cybersecurity was ready to manage it, and cybersecurity’s been playing catch-up on the AI front.

But the reality is what’s happening is AI is compressing timelines and skill sets. When I say it’s compressing timelines, exploit code appears within days of disclosure.

Ransomware groups are weaponizing vulnerabilities rapidly, because all they gotta do is use the same tools we can use. They hop into ChatGPT, pop things in there.

And they get that response as quick as ever, and it is sophisticated. They’ve even developed their own AI tools. There was one a while back called WormGPT, and it was focused purely on developing threats and attacks to be used in cybersecurity attacks. So, attackers are also using AI to scan continuously and automate targeting.

If we can use it, hackers can use it, and hackers can use it with greater effect, because the big difference between what we do and what they do is they don’t care about the law.

AI is also compressing skill set. Hackers are smarter.

What we used to call script kiddies, well, with AI, their skill set has been escalated significantly and quickly. We can’t count on broken English in emails anymore, if they’re coming from overseas; they can just drop the text into ChatGPT, and now they’ve got a nice, perfectly formatted email.

And some of the attacks that they are… some of the emails they are sending as attacks are just… brilliant, and I hate to use that word to describe it, but they really are. They’re… they are doing little things to manipulate themselves through all the defenses that they know exist out there.

And so, it’s really a game of, and I think Brian Jackson said it best one day when we were talking about it, it’s moves and counter moves. AI has just changed the game a little bit. It’s a constant game of chess.

But there’s a lower entry point for threat actors, too, to be impactful. They don’t need to know everything. They just need to know how to use AI, and they’re there.

But that changes some things for us.

On the vulnerability front, quarterly patching cycles are outdated thinking. If your company is not patching regularly and consistently, and keeping things updated, you’re gonna run into trouble. The impact of AI on cybersecurity has been tremendous on both sides, the offensive and the defensive. Things to think about. Next slide.

So, 24-7 monitoring.

Used to be a luxury. Now, it’s a necessity.

Most organizations have a few tools, but few have continuous log ingestion. Every device you work on creates logs, and those logs tell an important story. Somebody needs to be ingesting those, monitoring those, and keeping those.

Because that all provides what we call telemetry. Kind of think of a radar screen. Gives you the big picture of what’s happening all around you. And that’s the only way you’re going to be able to tell if hackers are in your environment a lot of times, because the logs indicate it.

And you need eyes on that 24-7. Correlated alerting is something else you need to think about: all of that information being brought together in one place, so you can correlate it.
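As a toy illustration of what correlation means, the sketch below collapses repeated failed logins for one account from one IP into a single higher-severity alert; the threshold and window are assumptions for the example:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # assumption: correlation window
THRESHOLD = 5                    # assumption: failures before alerting

def correlate(events):
    """events: iterable of (timestamp, user, ip, success). Returns one
    alert per (user, ip) pair with THRESHOLD failures inside WINDOW."""
    failures = defaultdict(list)
    for ts, user, ip, success in events:
        if not success:
            failures[(user, ip)].append(ts)
    alerts = []
    for (user, ip), times in failures.items():
        times.sort()
        for i in range(len(times) - THRESHOLD + 1):
            if times[i + THRESHOLD - 1] - times[i] <= WINDOW:
                alerts.append((user, ip, times[i]))  # one alert, not five
                break
    return alerts

t0 = datetime(2026, 1, 2, 2, 13)
events = [(t0 + timedelta(minutes=m), "cfo@example.com", "203.0.113.9", False)
          for m in range(6)]
print(correlate(events))  # one correlated alert for the burst of failures
```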

After hours. Who’s monitoring your network overnight? Who’s monitoring your network on weekends?

Hackers know we’re not gonna be at the office then. That’s when significant attacks happen.

Defined response playbooks, knowing how to react and mitigate that threat, or isolate it at best. For example, if an account is compromised, we can lock that account down, and the hacker can’t do anything else. The damage is stopped. Having 24-7 monitoring on that is huge.

And then, of course, the ability to contain. Ransomware can be contained. It can be minimized as it’s starting to spread.

You know, ransomware canaries are a powerful tool. For example, decoy files are planted around the environment, and if one of those canary files is encrypted, 24-7 monitoring, properly done, can allow the attack to be mitigated by isolating that device so it can’t spread any further.
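Here is a conceptual Python sketch of that canary idea; the file paths and polling interval are hypothetical, and real tooling would quarantine the host rather than just print:

```python
import hashlib
import os
import time

# Hypothetical decoy file locations scattered across network shares.
CANARIES = ["/shares/finance/_canary.docx", "/shares/hr/_canary.xlsx"]

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Record each canary's known-good hash at startup.
baseline = {p: digest(p) for p in CANARIES if os.path.exists(p)}

while True:
    for path, expected in baseline.items():
        if not os.path.exists(path) or digest(path) != expected:
            # Any change to a canary (e.g. encryption) is an alarm signal.
            print(f"ALERT: canary {path} modified; isolate this host now")
    time.sleep(30)
```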

And so, again, having eyes on your system is a big deal. Again, that alert at 2:13 a.m. Friday is useless if no one acts until Monday.

Cybersecurity is about closing that gap between compromise and containment. That’s called risk reduction, and that’s what this is all about. Next slide.

Vulnerability management.

Let’s not dismiss this. This is the discipline most organizations underestimate. Vulnerability management is more than patching.

It’s about risk prioritization. It’s about looking at your environment and having a sense of what vulnerabilities are there. I’ll give you an example of a vulnerability a lot of people don’t realize.

Windows 10, for example, is a vulnerability in your environment right now. If you’ve got machines running Windows 10, Microsoft’s no longer updating Windows 10 and providing security updates.

Windows 7, Windows XP, we still see environments with those systems in them. Those are major vulnerabilities, those are hacker havens.

And so, it’s about looking at that end-of-life software, end-of-support software, and hardware. Those are critical ones. Keeping that hardware, the firmware, updated on the hardware in your environment is critical.
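As a small illustration, an end-of-life check can be as simple as comparing an asset inventory against published end-of-support dates (the hostnames are made up; extend the table to the software actually in your environment):

```python
from datetime import date

# Microsoft's published end-of-support dates for the systems named above.
END_OF_SUPPORT = {
    "Windows XP": date(2014, 4, 8),
    "Windows 7": date(2020, 1, 14),
    "Windows 10": date(2025, 10, 14),
}

def flag_eol(assets):
    """assets: list of (hostname, os_name) pairs from your inventory."""
    today = date.today()
    for host, os_name in assets:
        eol = END_OF_SUPPORT.get(os_name)
        if eol and eol <= today:
            print(f"{host}: {os_name} unsupported since {eol}; hacker haven")

flag_eol([("FINANCE-PC-07", "Windows 10"), ("CAD-WS-02", "Windows 7")])
```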

But here’s where the gaps take place.

We don’t know what we have in our environment.

So, of course, we can’t secure it if we don’t know what’s exposed.

Unknown internet-facing services: login pages out there on the internet, a lot of times because somebody pulled something out of the box and set it up, which automatically put a login page on the internet, and they didn’t turn it off. And a lot of times, they leave the default credentials there.

So it’s just admin, password, admin, and you’re in.

And hackers know that. That’s all readily available information.

There’s no context-based prioritization: looking at the big picture, what are our biggest risks, and what is not as important? There’s no such thing as a zero-risk digital environment, but you do want to minimize that risk as much as you can.

CVSS is an internationally recognized scoring system; whenever a new vulnerability is disclosed, it’s published with a CVSS score, and a lot of times, people just use that score to make decisions. But there’s more to it than that, because what the score characterizes as a high risk may not exist in your environment.

Or, it may exist, but it’s not as high as another risk that you have in your environment. So again, having all this information helps.
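A minimal sketch of what context-based prioritization could look like, where the weighting formula and the criticality scale are illustrative assumptions rather than any standard:

```python
def priority(cvss: float, exploited_in_wild: bool, asset_criticality: int) -> float:
    """asset_criticality: 1 (isolated lab box) .. 5 (revenue-critical server).
    The CVSS base score alone is not the decision; context moves a finding
    up or down the queue."""
    score = cvss * (asset_criticality / 5)
    if exploited_in_wild:          # e.g. listed in CISA's KEV catalog
        score += 3.0
    return round(min(score, 13.0), 1)

# A 'medium' CVE under active exploitation on a critical server can
# outrank a 'critical' CVE on an isolated test machine:
print(priority(6.5, True, 5))    # 9.5
print(priority(9.8, False, 1))   # 2.0
```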

And then, of course, the last thing: who’s remediating? Who’s cleaning up these vulnerabilities in your environment? If you have no validation of remediation, you don’t know whether they’re being cleaned up, and they can still exist. Next slide, please.

So, when we talk about this, you’ve got to have continuous asset discovery. You need risk-based prioritization: what’s being exploited in the wild, in addition to business criticality. You need clear remediation ownership.

And then compensating controls. Sometimes a patch can’t happen: I need that Windows 7 machine because there’s some data that can only be accessed through software that isn’t being updated anymore. So we can build some compensating controls around that, but identifying that and isolating that device so it’s safe is critical. And then validation and executive reporting. Leadership should know what’s happening on this front.

But attackers automate discovery, so your defense must be at least as disciplined as that.

Next slide, please.

There we go. And so the last thing I want to talk about for a second is AI and cybersecurity, and we’re going to transition to Lauren here in just a second. AI is an acceleration engine.

Attackers use AI to improve phishing realism, automate reconnaissance, and scale social engineering. Social engineering attacks take place in a variety of different ways: they can call you and pretend to be somebody they’re not, and gift card scams are a common social engineering component. They use social engineering to gain the information they need to access your systems. They evade traditional detection patterns. They’re finding ways around even robust security platforms that have been around a while.

Now, defenders use AI to improve anomaly detection, to enhance Security Operations Center triage, and to improve risk prioritization.

But here’s the big thing.

There was some software, I guess about two months back, brand new stuff, they were touting it everywhere: AI-driven software that was supposed to help cybersecurity. Well, within two weeks, attackers had weaponized it.

That’s the game we’re playing right now. AI can be a business multiplier, but we cannot ignore the real threat AI poses to your organization in 2026.

AI does not eliminate governance risk, it multiplies it. And that serves as a perfect transition to turn things over to Lauren, who’s going to talk about that.

Next slide.

Lauren Pankey: All right. Hey everybody, my name is Lauren Pankey. I joined Abacus back in October; before that, I was at another accounting firm for 6 years.

I’ve been involved in and performed 500-plus SOC audits spanning multiple industries. Here at Abacus, I’m on the InfoSec team, where we conduct SOC audits, compliance services, and AI governance for clients, and we’re currently in the process of becoming an ISO certification body.

Next slide, please.

So let’s talk, governance.

So, governance used to be a nice-to-have.

In 2026, it’s the only way to scale AI without losing control.

Today, we’re moving from abstract ethics to hard policies.

The threat landscape isn’t just hackers anymore; it’s shadow AI. If your marketing team is using a random AI tool to summarize client meetings, your data is already outside your fence.

And, governance is the fence. So…

Some of the risks we’re seeing today are agentic risk, data leakage, and shadow AI risk. And so, agentic risk is an AI agent acting autonomously that can bypass traditional firewalls.

Data leakage risk is a huge one. This is the one we’re probably seeing the most.

And it’s where employees feed data, code, or PII into the public models.

And then shadow AI risk is the rise of unvetted AI tools used across departments without IT oversight.

And so, Agentic Risk, you’re probably asking, what is Agentic AI, and what are AI agents?

So, AI agents: they handle a single, well-defined task on their own. They’re not designed to manage complex tasks or adapt to changing business conditions without an agentic, unifying layer. An example of this could be a personal shopping assistant or a 24-7 chatbot.

And then Agentic AI is the system that coordinates many of these to execute broader, multi-step workflows that span teams and systems.

Agentic AI adapts as the system changes. An example of this would be something like Uber’s enhanced agentic RAG.

Next slide, please.

So what exactly is Shadow AI, and how does that relate to data leakage?

So, Shadow AI is the unauthorized, unapproved, and often unmonitored use of AI-powered tools, applications, and services within an organization, usually by employees attempting to increase productivity without following IT security protocols or company policy.

Some employees may enter confidential, proprietary, or personal data into public AI models, which can then be used to train those models further, exposing the company’s information. Some examples of shadow AI are using free or personal accounts for tools like ChatGPT, Claude, Gemini, or specialized AI coding assistants, without or in violation of company policy.

I would say a vast majority of AI usage in many enterprises occurs without IT oversight, often driven by employees looking for quick, efficient solutions to everyday tasks. As humans, we’re curious. We naturally want to see what these tools can do, how they can better our lives, how they can save us time. So oftentimes, employees may just throw in customer contract documents containing customer data without thinking twice about what happens next.

Those public AI models could be using that same data we’re throwing into them to train their models for enhancements.

To manage shadow AI, organizations are advised to develop clear AI usage policies, educate employees on risk, and provide approved, secure alternatives to shadow tools. This is why it’s so important to have governance around AI: a lot of the time, we don’t know what these tools are doing with our data, and we don’t know how they’re training their models. We’re just unaware of that.

Next slide, please.

So let’s talk governance frameworks.

Two of the main ones, ISO 42001 and NIST AI RMF.

There’s a ton of AI governance frameworks out there currently, but only a few are taking the lead right now, and these are what we’re seeing most companies adopt to provide some security and governance around AI.

ISO 42001, that’s kind of the new gold standard for AI management systems. It provides the first international certifiable and comprehensive framework for governing the entire lifecycle of AI technologies.

It also allows organizations to manage AI-specific risks such as bias and security while fostering ethical, responsible innovation. And unlike voluntary frameworks such as the NIST AI RMF, ISO 42001 is a certifiable standard, meaning accredited third-party auditors can verify an organization’s compliance.

It also covers AI across the entire lifecycle, from conception to development, deployment, operation, and decommissioning, ensuring continuous oversight.

Also, 42001 follows the same high-level structure as ISO 27001, which, if you haven’t heard of it, is the information security standard, and ISO 9001, the quality management standard.

So it makes it easier for companies to integrate AI governance into their organization.

And then the NIST AI RMF. This is not a certifiable standard, but it provides a flexible, systematic approach to building trustworthy, responsible, and ethical AI systems that you can build and follow in-house. It is a voluntary framework that offers an approach to identify, assess, and mitigate risks such as bias and security threats.

While defining clear roles and responsibilities for AI, it also helps companies prepare and align with emerging global AI regulations.

And then the last one, audit readiness.

Audit readiness is essential for AI governance and integrating AI security into existing compliance cycles because it ensures systematic risk management, it builds stakeholder trust.

And it also helps to maintain legal and regulatory compliance in the rapidly evolving landscape of AI technologies.

AI introduces unique risks not typically covered by traditional security frameworks, such as data bias, explainability issues, and model manipulation (for example, adversarial attacks). Audit readiness forces organizations to proactively identify, assess, and mitigate those specific risks, ensuring a robust security posture.

Next slide, please.

So, the blueprint, establishing a core AI policy.

If your company’s using AI, which I would say most companies are at this point, establishing an AI acceptable use policy is crucial, and it should include, at a minimum, data sensitivity procedures, ethical guardrails, and control and accountability.

So your acceptable use policy should be your first line of defense.

So you kind of need a traffic-light system: define green as allowed, yellow as restricted, and red as prohibited. If it’s an internal, private instance of a model, it’s green. If it’s a public, free version of ChatGPT, it’s red for sensitive data.
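As a sketch, that traffic-light logic can be expressed as a simple default-deny lookup; the tool names and tiers below are assumptions for illustration:

```python
# Toy traffic-light lookup matching the policy described above.
RULES = {
    ("internal-llm", "sensitive"):     "green",
    ("internal-llm", "public"):        "green",
    ("chatgpt-business", "public"):    "green",
    ("chatgpt-business", "sensitive"): "yellow",  # restricted: needs approval
    ("chatgpt-free", "public"):        "yellow",
    ("chatgpt-free", "sensitive"):     "red",     # prohibited
}

def check(tool: str, data_class: str) -> str:
    # Default-deny: anything not explicitly listed is treated as prohibited.
    return RULES.get((tool, data_class), "red")

print(check("chatgpt-free", "sensitive"))  # red
```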

An artificial intelligence acceptable use policy is critical for governing AI by mitigating those security risks. You’re ensuring data privacy, fostering ethical, compliant usage of AI.

It also helps to define those approved tools, prohibits inputting confidential data into the public models, and sets clear accountability.

I would say key components include data handling rules, allowed tools, training, and enforcement procedures. This can help reduce the risk of employees uploading sensitive info into public AI models, and it helps to promote fairness and to identify bias; a lot of employees don’t know how to identify bias. The policy should define who is responsible for AI outcomes and decisions, so employees know who to go to and report issues to for AI usage.

And the policy, at a minimum, should include the scope, authorized users, approved AI tools, data handling guidelines, prohibited tools and uses, human-in-the-loop requirements, reporting procedures, compliance and penalties, and training and education, with training and education being one of the most important, I would say.

Next slide, please.

So let’s talk the engine, operational procedures and processes.

So, in our assurance work, like SOC 2, we look for procedures and processes.

So you need an AI use case inventory, which is a mandatory registry for every AI tool used in the company.

You can’t govern what you haven’t listed, and every tool needs to go through a risk assessment before a single employee logs in.

So, an AI use case inventory is a centralized, structured repository that catalogs all artificial intelligence applications, projects, and tools within your organization.

It documents key details like each AI’s purpose, its data, its criticality, its stakeholders, and its risks, serving as a foundational tool for responsible AI governance, transparency, and strategic management. It also allows organizations to track and manage risk, ensure AI systems align with regulations, and gain visibility into how, where, and why AI is deployed. And it helps to identify and mitigate potential biases.
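A minimal sketch of what one inventory entry could look like, with fields mirroring the details just listed (the names and values are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    tool: str
    purpose: str
    data_classes: list[str]      # e.g. "public", "internal", "PII"
    criticality: str             # e.g. "low" / "medium" / "high"
    owner: str                   # accountable stakeholder
    risks: list[str] = field(default_factory=list)

inventory: list[AIUseCase] = [
    AIUseCase("Copilot", "draft proposals", ["internal"], "medium",
              "marketing@example.com", ["data leakage"]),
]

# You can't govern what you haven't listed; flag anything high-risk:
for uc in inventory:
    if uc.criticality == "high" or "PII" in uc.data_classes:
        print(f"Needs an impact assessment before rollout: {uc.tool}")
```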

Lauren Pankey: And then the AIA, an AI impact assessment, which is a formal procedure to score the risk level of a new AI tool before deployment. This is a structured, preemptive review process used to identify, evaluate, and mitigate the risks, bias, and societal impacts of automated decision making before we deploy those AI tools.

It can act as a governance and ethics tool, covering fairness, accountability. I would say these are pretty crucial for ensuring AI safety and building that public trust while meeting regulatory compliance and preventing harm to individuals.

Conducting these assessments can also boost confidence in your organization and among the users that are using these AI tools.

And then Human in the Loop, or HITL. These are procedures requiring a human sign-off on high-consequence AI outputs.

So this is a collaborative approach where humans actively participate in training, tuning, and testing machine learning models, as well as reviewing their decisions. It’s critical for improving accuracy, reducing bias in the AI, ensuring ethical safety, and providing context the AI cannot understand alone. And AI is not always right, so this is one of the more crucial ones. The output of the AI always needs to be reviewed, because sometimes it is not right.
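Here is a minimal sketch of such a human-in-the-loop gate; the output categories and reviewer are illustrative assumptions:

```python
# High-consequence outputs are held until a named reviewer signs off;
# everything else flows through automatically.
HIGH_CONSEQUENCE = {"wire_transfer", "contract_clause", "medical_advice"}

def release(output: str, category: str, approved_by: str | None) -> str:
    if category in HIGH_CONSEQUENCE and not approved_by:
        raise PermissionError(f"'{category}' output requires human sign-off")
    return output

release("Summary of meeting notes...", "meeting_summary", None)        # OK
# release("Pay vendor $250,000...", "wire_transfer", None)             # raises
release("Pay vendor $250,000...", "wire_transfer", "cfo@example.com")  # OK
```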

Next slide, please.

So, third-party AI governance. So there’s a few key points I wanted to touch on. Vendor due diligence, model provenance, and contractual liability. So, most companies, I would say, don’t build their own AI. They buy it.

So your procurement procedures must change whenever you are buying AI and using third-party SaaS tools. You need to ask these vendors how they train their models. If they can’t tell you how they train their model, then you shouldn’t trust it.

Vendor due diligence: we should be updating our SOC 2 and ISO questionnaires to ask specifically about model training data. These third-party AI tools often pose significant security and privacy risks, because they process sensitive data outside your direct control.

Governance ensures vendors comply with regulatory requirements and security standards (GDPR, CCPA, and a lot of other European and domestic regulations) before, during, and after the engagement, and using AI to analyze that documentation allows faster, more consistent, automated, and continuous monitoring, rather than just periodic assessments.

Model provenance: we should definitely verify where the data came from. Understanding how a vendor’s AI model is trained, the data sources it uses, and its limitations is crucial for identifying potential bias and ethical issues.

And then lastly, contractual liability. It’s important to know if vendors assume liability for model hallucinations or security breaches.

Since legal responsibility for AI outputs often remains within the company deploying the technology, not the vendor, strict contractual clauses are necessary.

Governance frameworks help to define that liability for model failures, intellectual property issues, and data breaches.

Next slide, please.

So, controlling the keys.

Liability follows the human. So if an AI makes a mistake, the company is still responsible. So that’s why we need to have identity management for AI. We shouldn’t give an AI agent global admin rights. We give it the bare minimum access it needs.

We need to be treating AI agents like a user, with a unique ID and limited permissions.

Identity for AI, least privilege, audit trails, and kill switches are essential for AI governance to manage those risks. We’re ensuring accountability by doing that, and maintaining control over autonomous systems.

And these mechanisms prevent unauthorized access. They secure sensitive data, provide mechanisms to immediately halt rogue or malfunctioning AI agents.

Acting as a critical external security layer.

And limiting those AI agents to the minimum necessary access prevents them from becoming super admins that can cause catastrophic breaches or share sensitive data with unauthorized parties.

These logs, the audit trails, demonstrate, rather than just assert, that security controls are effective. And then the kill switch: everyone should have a kill switch, a formal procedure for emergency deactivation of a rogue or compromised AI system.
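Putting those mechanisms together, a toy sketch of an agent with a unique ID, least-privilege permissions, an audit trail, and a kill switch might look like this (the agent ID and permission names are hypothetical):

```python
class Agent:
    def __init__(self, agent_id: str, permissions: set[str]):
        self.agent_id = agent_id
        self.permissions = permissions   # least privilege: grant only these
        self.killed = False              # kill switch
        self.audit_log: list[str] = []   # demonstrate, don't just assert

    def act(self, action: str) -> None:
        if self.killed:
            raise RuntimeError(f"{self.agent_id} is deactivated")
        if action not in self.permissions:
            raise PermissionError(f"{self.agent_id} lacks '{action}'")
        self.audit_log.append(action)

bot = Agent("invoice-bot-01", {"read_invoices", "draft_email"})
bot.act("read_invoices")                 # allowed and logged
bot.killed = True                        # emergency deactivation
# bot.act("read_invoices")               # now raises RuntimeError
```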

So, future trends in AI governance. There are a few topics I wanted to touch on: audit your shadow AI now, continuous auditing, and a cross-functional AI council.

So the future is governance as code. We’re moving away from telling people what to do and moving towards systems that prevent them from doing the wrong thing.

Don’t try to solve AI governance in a day. Start with your inventory, know what tools your team is using, and then wrap a policy around it.

Especially an AI acceptable use policy. Governance is a journey and not a destination, so it’s important to identify every AI tool that currently touches your corporate data.

So that includes your browser extensions, your personal accounts, and free trials used by teams.

You can’t govern what you can’t see, and for continuous auditing, it’s important that we’re shifting from a once-a-year audit to real-time compliance monitoring.

So, organizations need to embed robust model testing, validation, and ongoing assurance for every AI system that we develop or procure, and continuous evaluation for accuracy, fairness, explainability, and compliance, alongside clear human oversight at every stage, will be essential.

Cross-functional AI Council.

It’s important to have one of these in your organization as a dedicated council mitigates risk, such as bias and security breaches. It creates necessary guardrails for ethical use, and it provides the structured oversight needed to scale AI safely while accelerating innovation.

Key reasons for an AI council include regulatory compliance and legal risk, ensuring compliance and avoiding documentation gaps and costly litigation; managing agentic AI guardrails, trust, and ethical standards to ensure systems are fair and trustworthy; risk mitigation for data breaches, hallucinations within the AI, and harmful outputs; and lastly, strategic alignment to bridge the gap between leadership and technical teams.

Next slide.

All right.

We’re gonna leave it there for any questions.

Brian Jackson: Thank you, Lauren and Jonathan, for a great presentation and a lot of good information. A couple of things as we look at these topics that I wanted to address, individually or to both of you, as items that I know have come up with our clients and in many conversations I’ve had, even outside our clients. And I want to start with Jonathan, really. You know, Jonathan, I know you deal with a lot of real-world incidents. In the ones you’ve seen, and in your presentation, you talked about the delay between initial compromise and the actions you have to take for containment. What usually causes that delay?

I mean, how do you… you talk about the compressed timeline, you know, how can we minimize that timeline between, hey, compromises happen and containment, you know, how do we get better at determining, you know, what’s happening and if there’s a problem?

Jonathan Perz: Great question, Brian. You know, in my experience, what I’m seeing time and time again is the big gap between those who know and those who don’t know they’ve been attacked and that a hacker is in their environment. The difference is monitoring: having some tools in place, having some settings configured, actually reviewing sign-in logs, actually being notified that there was potentially a suspicious login on an account. For example, a big one would be, let’s say Brian logs in from Birmingham, Alabama, every single time, and then all of a sudden there’s a login from Turkey.

You want to be notified of that if it was a successful login. That’s going to tell you, that’s going to close the time gap, because the sooner you can find that out, you can go take a look. It may be an anomalous login. Maybe Brian was using a VPN that day. Wanted to look at some TV in Turkish. I don’t know. But you won’t know unless you look at it.

You know, unless you’re monitoring those things, that’s the big time gap. Almost universally, they have nothing to tell them that there’s something suspicious happening.

Brian Jackson: That’s where cybersecurity steps into the picture. You need to have some monitoring tools in place, and some good configuration, and then having somebody actually looking at those reports.

Yeah, risky sign-ins is, I know, one that we can set up to monitor. What are some other things maybe we can monitor?

Jonathan Perz: We can log… we can monitor risky sign-ins. You know, staying on top of your vulnerability management is critical. You know, if you don’t know your end-of-life software, for example, there… I imagine there are still folks who didn’t know Windows 10 was end-of-life.

And so, end of support, I should say. And because of that, they may be running Windows 10 systems in their environment. Having knowledge of your environment and understanding where the gaps might exist is critical, and that would be something we’d want to be paying close attention to. Those are easy things to fix.

You know, but hard to fix after the hacker exploits them.

Brian Jackson: Yeah, and one thing I would say, a lot of these alerts, logging, and items can be configured in Microsoft 365 out of the box. I mean, there’s very few extra things you have to do to 365 to have all of these safeguards in place, and a lot of times, they just have to be configured. I think that’s one thing that we often see. They’re just not, you know, hey, the client has access to the logging, the alerts, and, you know, setting up certain guardrails, and they’re just not in place. I mean.

We’ve seen that many times in incident response cases that we’ve faced overall.

Jonathan Perz: Brian, I would add to that, just the use-what-you-have principle.

Brian Jackson: Yes.

Jonathan Perz: Use what you have. You have it already, you’re paying for it, you’re just not using it.

Brian Jackson: So, outside of alerts, what signals usually indicate identity compromise? You talked about identity being the new perimeter. What signals typically indicate an identity is being compromised, and how can I maybe catch that before some kind of financial damage occurs?

What are the early warning indicators?

Jonathan Perz: Here’s your biggest one, if you’re not using multi-factor authentication and enforcing it.

It’s not a silver bullet, it doesn’t stop everybody, every attack. I prefer to refer to MFA as a… I call it multi-factor alerts, and here’s why.

Because what that tells you is if you get that alert, any one of your users gets an alert, an MFA alert, that tells them, somebody logged into my account.

And if it wasn’t me, who?

That’s an automatic indicator. If I didn’t enter my password somewhere, somebody successfully entered my password, and if it wasn’t me, that’s a red flag. You need to call your IT team, your security team, get them involved, and let them go look at your sign-in logs and figure out what happened there. That’s the perfect time. That’s a small, easy example of something that can tell you that your identity has potentially been compromised.

Brian Jackson: And I would say, adding on to that, you know, we don’t want financial damage to occur. We always try to be on the proactive end with our clients. And I would add that if you continually do business with a vendor or a client, and all of a sudden they ask you to change any type of financial transfer information, whether it be routing or account numbers or wiring instructions, that should be an automatic red flag. You need to contact them, and you need to reach out to them with a phone call.

And I would say do not necessarily use the number shown in their signature either. Go outside of that, because we have seen threat actors actually modify the signature of compromised accounts with their own phone number, so that when you do call, you’ll actually get the threat actor, and they’ll say, yeah, it’s okay. So just make sure you have good contact information, and look for keys like that that may tip you off to some kind of potential financial issue that is happening.

Moving over to AI, you know, I’m gonna sort of hit two areas here, but, you know, from a security side, you know, what steps can clients take to secure AI platforms? And let’s keep it simple. What about ChatGPT? What about Microsoft Copilot? What are some things that clients can do to help secure these platforms? And this is really a question for both of you.

Lauren Pankey: I’ll start off here, Jonathan, by saying, you know, pretty much everybody is using AI. If they’re not using a business account version of ChatGPT or Microsoft Copilot, they’re probably using a personal account.

So I would say the most important thing right now is, you know, setting up security measures, setting up a business account.

for your employees, so that they’re not using personal accounts. Because then, on the back end, we can configure and secure that ChatGPT, Claude, or Gemini LLM. You can configure what data can be input into those LLMs.

You can configure access and security permissions.

So I would say if that’s not in place now, that would be the next thing I would look to do: have business accounts instead of your employees using personal accounts, because you can’t govern what is put into a personal ChatGPT, Claude, or Gemini account.

Jonathan Perz: You know, Brian, I think you touched on two things there. One, the security settings. Use what you have. There are security settings behind both of those applications, Copilot and ChatGPT. I’ve been using ChatGPT since the early days, and one of the things about it is they didn’t have nearly as many security settings in the early days as they do now.

There’s a lot you can go and configure, and a lot of it’s checkbox configuration.

And so, by all means, go in there and look at your settings. It goes… that goes for any app you’re using, social media, otherwise, because just about everything is bolting some AI component onto it now.

And you gotta be thinking about that, you know?

But that’s a big deal, and it’s an easy, easy fix. And if you’re not sure what to look for, somebody is; just reach out. There’s plenty of help out there to tell you what to look for.

The second point I would make about that, Brian, is, you know, Lauren touched on it: if people are using their personal AI accounts, that takes your data, your business data, your client’s data. If they put it into their personal account, it takes it outside your governance.

And now, like Lauren said earlier, it’s outside the fence.

And there’s not much you can do about that. And if they mistakenly put something real sensitive out there, and they don’t have security configured on it, that could lead to significant legal implications. So creating some governance is really important. Better to buy it for them, if they’re using it, than not.

Brian Jackson: Yeah, I know one of the first things we did as a family of companies, as an early adopter of AI, and I think it was very smart of our management team: hey, we’re just gonna create business accounts and set up a process. I think that was a very strategic thing to do, that, hey, we’re gonna give you an account, because we want you to use it. I mean, I think that’s the balance we have here: this is a very innovative technology, but there are a lot of security items we need to consider with it.

Something we did up front: hey, we’re gonna give you a ChatGPT account, we’re gonna pay for it under a business premium account, but you do have to agree to the training. So everyone in our organization, if you want the account, you have to go through training and certify that, you have to request access from our IT department, and you have to agree to the policy. So I think we put three good governance items in place right up front, and we’re seeing a lot of clients follow that model, some not. But whether you’re governing it or not, your employees are using AI in some form or fashion, and access to it is easy.

I’ll say on the Copilot side, when you go from ChatGPT and you start deploying Copilot licenses, security becomes a lot more critical, because once you enable the license for a user, they have access to hundreds of connectors and integrations, and there’s a whole list of security items you need to consider with Copilot, because now it has access to SharePoint libraries, to Outlook, to Teams, pretty much any program you’re using within the Office platform. Copilot will now have access to that.

And I think that’s one of the areas I see, hey, security being extremely important as we look there.

Jonathan Perz: Brian, can I piggyback on that?

Brian Jackson: Absolutely.

Jonathan Perz: Yeah, there’s a big deal on that particular front with Copilot. Once you turn it on, you’ve pretty much exposed all your data, and your data may not be organized and segmented in a good way. For example, you don’t want everybody in the company knowing what everybody’s payroll and salary is, and if they can type that question into AI and there’s no data segmentation in place, AI is going to answer that question. AI doesn’t care. AI’s gonna answer that question, and you’re gonna have problems because of that. So having good data segmentation in place, and data labeling in place, is important. You’ve kind of got to manage your data a little bit before you turn AI on.

Brian Jackson: Yeah. And shifting a little bit more on the third-party risk, I know it seems like 6, 7 months ago, everybody had an AI tool, right? And all you need is a credit card to get access to the AI tool.

So, I know, Lauren, you know, you have been intimately involved in third-party risk management, especially in our firm, as we’ve looked at other AI products. Talk a little bit about the process that you go through and how you vet these third parties, maybe some of the questions you ask, and maybe some of the red flags.

Lauren Pankey: So, there are a few topics we like to cover with these third-party vendors, and we’re aligning our topics with the ISO 42001 AI standard.

Because that’s the only really certifiable standard right now for AI. So governance, policy, leadership, so we ask these vendors, you know, do you have an established AI governance policy? Who’s approving that policy, and who on your side is responsible for AI decision making?

Do you have an AI life cycle? So, development, deployment, monitoring, and retirement of your AI agents?

What are you using to train your models? That’s a big one, I would say the biggest one, because you, as an organization, don’t want these AI vendors to be using your data to train their models. So asking: what type of data are you using? Is it in-house data? Are you using customer data to train those models?

And then we kind of go into data management and privacy. So, does the vendor classify data according to sensitivity and regulatory requirements?

What data is used to fine-tune the AI model, and is that defined in a policy? And then, how are you handling customer data once that customer leaves or is no longer using your services? Is it instantly deleted, or are you keeping it for a certain number of months, or a year, and using that data to further train your model?

Do you have data retention requirements?

And then we kind of go into security and access controls. So, you know, where is the data stored? Is it on-prem? Are you storing it in cloud software like AWS or Azure? And then, are those cloud environments protected via MFA and encryption?

So we kind of go into, you know, data access, storage, encryption mechanisms, AI acceptable use policy, governance, and those topics, and then it’s really about transparency and communication, which will come from the contract with the vendor.

Brian Jackson: Great answer. Yeah, I would say we’ve always had a good risk program for looking at software-as-a-service companies, but now, as they’ve been integrating AI into those platforms, that conversation just extends out. A lot of times it’s, hey, are you doing penetration testing? Do you have your SOC report? There are a lot of things we normally ask, but now, from what Lauren said, you’ve got to extend those conversations past those normal rote questions to really assess the risk of these AI tools, how they integrate with your systems, and also how they’re using your data. A couple questions did come in that I want to talk through, especially one of them.

Obviously, this technology brings a lot of efficiency to the forefront, and people want to use it. It is great technology; I use it, and I know we all use it quite a bit. This question came in from a not-for-profit: obviously limited budgets, relying on volunteers. What are some recommendations we could give them for deploying AI in their organization?

Lauren Pankey: I would start by taking a subset of employees. I’ve seen a lot of companies send out questionnaires: are you using AI for personal use? If so, what AI tool are you using? Take a subset of the employees who answer yes, and set up a business account for them, either Copilot or ChatGPT.

See how they’re using it, ask them how they’re using it, and how it could improve their daily work tasks.

And then go from there. But a policy needs to be in place once you’re educating and training those employees. Put in a policy: what’s prohibited, what can be done, those types of things.
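As a rough illustration of that pilot approach, a short questionnaire plus a simple tally can surface the employees already using AI, who then become the first group to receive managed business accounts. The survey fields, names, and tools here are hypothetical.

```python
# Minimal sketch: pick a pilot group from AI-usage questionnaire responses.
# The survey fields, names, and tools are hypothetical examples.

responses = [
    {"name": "Avery", "uses_ai": True, "tool": "ChatGPT"},
    {"name": "Blake", "uses_ai": False, "tool": None},
    {"name": "Casey", "uses_ai": True, "tool": "Copilot"},
]

# Employees already using AI on their own become the pilot group that
# gets a managed business account first.
pilot_group = [r["name"] for r in responses if r["uses_ai"]]
print(pilot_group)  # -> ['Avery', 'Casey']
```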

Brian Jackson: Yeah, and another question came in, very similar, about small businesses. They want to take advantage of this technology too, but they obviously don’t have access to enterprise resources. I think they could take a similar approach to deploying AI in their organization: find the individuals who will work best with the technology, maybe the ones already using it, and start simple.

I always recommend clients start with ChatGPT. The license is relatively inexpensive, and it’s pretty easy to secure on the back end, so it doesn’t take a lot of security effort, because there are just limited controls for what you can and cannot configure on it, including connectors. So if you’re a small business, start with ChatGPT, start with that subset of employees, just like you would as a not-for-profit. Start getting your feet wet with AI using that platform. It’s the easiest to use.

Prompting’s also very important.

Brian Jackson: And then I think it’s important you find the right people. Not everyone works well with AI. You’ve got to find someone who is invested, wants to use the technology, and is very good at interacting with it. So, last question we had on here…

Yeah, go ahead, Jonathan.

Jonathan Perz: Yeah. On both of those questions, the security side becomes important, too. There are a lot of grants out there for nonprofits; even the government has driven grants for nonprofits. There’s a lot of money out there to bring in that you can use to implement security measures, whether cybersecurity generally or AI security at large. And another thing you can’t underestimate is what’s freely available out there in terms of training.

Find some training that makes sense for your company, that’s freely available, and utilize it if you have limited resources. But don’t send your people out to use AI without giving them some training first. They’re going to do things anyway that are outside the scope of what they learned, but give them some basic training, in particular on the security components of AI, the risks, and what they plug into AI. Don’t just leave them to their own devices, because…

You’re going to have data spillage, data leakage. And that’s where the big risk is; that’s a big part of the risk, as Lauren pointed out earlier.

Brian Jackson: And one thing I want to get down to is one specific piece of technology: these AI note-takers. A lot of times we’ll join a call like this, or a Zoom call, and you have these AI note-takers. From a security perspective, Jonathan, I’d love to hear your thoughts, and then Lauren, from a governance perspective. Jonathan, you go first.

Jonathan Perz: Alright, security for AI note-takers. Here’s the thing: they’re a dime a dozen, and they’re popping up everywhere. Matter of fact, a lot of times I’ll have AI note-takers start meetings, Zoom meetings and Teams meetings,

and nobody else is there except me and the AI note-taker. So, because I’m a wise guy, I’ll have a conversation with the AI note-taker just to have some fun and leave some interesting stuff behind. But at the end of the day, you have to think about the reality of…

Brian Jackson: How are those configured?

Jonathan Perz: Alright, so it’s taking that data. Where’s that data going?

And you also have to think that security is, in part, about ethics. This will probably lead into Lauren’s response, I’m sure, but there’s an ethics to it. Was that note-taker invited? If the information in that meeting is sensitive, do you really want an AI note-taker in there? And is it presumptuous to bring your AI note-taker into meetings? These things are a dime a dozen.

My wife had to use an AI note-taker to read somebody’s notes from a meeting. But when she logged in, that AI note-taker put its hooks in everything, and all of a sudden, every meeting she went to, she had an AI note-taker coming along. She had no idea how it got there. It wasn’t permitted. And it took a bit of a lift to get it completely cleaned out.

And so, you have to be careful with it. There are security implications there.

Lauren Pankey: Yeah, with these note-takers, there’s serious risk involved, especially around data security and confidentiality. On the governance side, number one, I’d go back to implementing an AI acceptable use policy. You can define in that policy which tools are banned, which ones are authorized, and for what types of meetings. I would say that’s very important: defining what types of meetings those tools can be used in. If it’s an executive-level or legal session, those tools should be banned from those calls. We should also be vetting the vendors.

There are tons of those note-taking AI tools out there: Otter, Fireflies, Read AI. We should be vetting those vendors to ensure they don’t use your meeting data to train their models. And lastly, establish a data retention policy for those tools: how long are they keeping that data after the meeting ends? Does it stay there for a week, a month, a year? And then train employees in general on the risks of using unauthorized, free AI tools that lack corporate-grade security protections.
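As a rough illustration of the governance controls Lauren describes (authorized tools, banned meeting types, retention limits), here is a minimal policy-check sketch. The tool names, meeting types, and the 30-day limit are assumed policy values, not any vendor’s actual settings.

```python
# Minimal sketch of an AI note-taker acceptable-use check. The tool names,
# meeting types, and 30-day retention limit are assumed policy values.

APPROVED_TOOLS = {"zoom-ai-companion"}         # only vetted vendors
BANNED_MEETING_TYPES = {"executive", "legal"}  # no note-takers here at all
MAX_RETENTION_DAYS = 30                        # assumed retention limit


def may_record(tool: str, meeting_type: str, vendor_retention_days: int) -> bool:
    """Apply the acceptable-use policy to a note-taker joining a meeting."""
    if meeting_type in BANNED_MEETING_TYPES:
        return False  # sensitive sessions: banned regardless of tool
    if tool not in APPROVED_TOOLS:
        return False  # unvetted vendor: boot it
    # Vendor must delete meeting data within the policy's retention window.
    return vendor_retention_days <= MAX_RETENTION_DAYS


print(may_record("zoom-ai-companion", "staff", 30))      # True
print(may_record("zoom-ai-companion", "legal", 30))      # False
print(may_record("random-free-notetaker", "staff", 7))   # False
```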

Brian Jackson: And one thing coming down the line that we haven’t mentioned: what are the legal ramifications of these AI note-takers? Is the summary of the meeting, or whatever that AI companion consumes, discoverable? Can it be used against you, or could it help you, in a legal setting? That’s an area where I haven’t seen a lot of opinions yet, but it’s something I think about: at some point, could that be subpoenaed, could it be used, is that information discoverable? I think data retention is extremely important, Lauren, thank you for bringing that up. When we evaluated Zoom, we wanted to see how long it keeps the data, and it’s very clear in their policy; I believe they keep their summaries live for about 30 days. Also, these tools aren’t always accurate, and you have to really look at the accuracy of the AI as it translates these meetings. Again, it’s a great tool and can work really well if you missed a meeting and want to know what happened, or you can’t attend. But just like anything with AI, we have to put parameters, governance, and thought around it, because we know these services are not perfect.

Wrapping up…

Jonathan Perz: And from a security perspective, if you’re leading a meeting, and you’re responsible for that meeting, and an AI note-taker shows up, don’t hesitate to boot the AI note-taker if it wasn’t invited.

Brian Jackson: Yes, and you can do that.

Jonathan Perz: Have that kind of control. Realize you have that control, and you have that responsibility, as a matter of fact.

Brian Jackson: Yes.

Jonathan Perz: You know what the meeting’s about, and if that AI note-taker wasn’t invited, boot it, because some people just bring it along without even asking.

Brian Jackson: And I see no problem with going into a meeting and asking the host to disable that technology. There’s nothing wrong with doing that, and I’ve seen it happen in a couple of meetings. Last thing here before we wrap up this morning: what about training? Who should be training these employees on how to use AI, and what does training look like from y’all’s perspective?

Lauren Pankey: I would say the training should include how to use it for day-to-day tasks, the things that are prohibited and should not be input into the AI, and the things that are allowed.

There should be ongoing training, because AI is not static; it’s getting smarter every single day. So the training should be continuous to keep up with the evolving technology.

And who should be giving that training? I don’t think AI training should be limited to a subset of employees. I think everybody needs it: executive leadership, HR and L&D teams, IT and internal tech teams. Everybody, because we’re all using it, but we’re all using it in different ways.

Jonathan Perz: I would add to that, and I agree with everything Lauren said, especially the fact that everybody needs it. I would make a strong argument that leadership needs it perhaps even more than everybody else, because they’re always pushing the edge, their schedules are jam-packed, and having tools that help you be more efficient is powerful. But in the name of convenience and efficiency, we sometimes forget security. Security, at the end of the day, is everybody’s job, so everybody should get the training. And who should be doing the training? Somebody who’s been trained. There’s a lot of training out there, but it should come from somebody who has been trained or who uses a validated, well-documented source. And I think Lauren hit on one thing: make sure it’s current.

This is an evolving landscape.

This is evolving faster than we can shake a stick at. It’s just happening, and we can barely keep up on the security front with how fast it’s moving. And so…

Have somebody who’s staying on top of it. Appointing champions in your company to be over this, or a team or a committee, is invaluable.

Brian Jackson: I think…

Jonathan Perz: To be monitoring that.

Brian Jackson: Yeah, I think that’s important. That’s something we’ve done: we have an advisory committee at BMSS made up of multiple practice leaders, with one person who champions the group, and it’s great. We meet every so often and talk about challenges, risks, a lot of different things.

When I look at training, the ways we’ve helped clients have really been in two areas. One, like Lauren mentioned, is users and management teams: what is the application of this AI, how can your employees use it to make their jobs easier and more efficient, and where is it applicable? On the other end, we’re also training IT teams. Because this technology is moving so quickly, we’ve worked with many IT directors and security managers just to get them up to speed on the security parameters and the things they need to be thinking about as deployment of these tools is demanded by the management team. They want these tools out there; we want to be innovative and take advantage of them, and there’s a little fear of missing out if your staff isn’t using these tools. But how do you do that securely?

Those are two ways we’ve definitely helped. One other question came in, more about personal use, and I’ll answer this one quickly. Personally, we find a lot of these tools very useful, but there are security risks with them. If you’re going to use AI personally, you can apply the same business rules we’ve talked about today. I would say use ChatGPT; that’s the one I go to all the time and the one I push my family toward if they want to learn how to use AI. It’s just really easy to deploy, control, and secure. So I’d definitely say adopt one personally if you want to use it; it’ll probably align with your business use as well.

Jonathan Perz: You know, Brian, on that front, one thing I would add is: don’t be naive. Have the conversation with grandma and grandpa as well. Talk about AI, because it’s not going away; it’s going to be part of everybody’s life moving forward. I like to tell a joke because it makes the point: I was telling my wife a joke in the kitchen, I laughed, she laughed, the toaster laughed, I shot the toaster.

Yeah. You know?

We don’t think about the fact that everything’s listening all the time. If you haven’t had your ads change because you spoke about buying a boat, and all of a sudden everything on Facebook, everything on Google, all the ads you see are about boats… something’s listening, something heard that, and now advertisers are using that information. People need to be aware. Naivety, ignorance, is the danger here. Make them aware, so they know what’s going on and can be sensitive to it.

Brian Jackson: Yeah, and I always tell clients this in my presentations on AI: AI is very confident in what it tells you, and it also wants to please you. Just be aware of those two characteristics of these tools. So, any closing thoughts you two want to share with the group as we wrap up this morning?

Jonathan Perz: You know, I’ll go first, Lauren, real quick.

Don’t take cybersecurity for granted.

There’s a cost associated with it, and I know some companies just can’t justify the cost at this time, but any cybersecurity is better than no cybersecurity. Cybersecurity, at the end of the day, is risk reduction, and it’s a business decision.

Reduce your risk. Even reducing it a little bit decreases the possibility of your being compromised, and a little bit over time is better than nothing at all. But whatever you do, plan for it. Get it on your agenda. Have a roadmap for it. Don’t just ignore it; it’s a big deal, and it matters. Anybody who kicks that can down the road is just waiting to be attacked, and then the hackers will dictate your cybersecurity budget rather than you.

Plan for it, and then move forward on those plans. Do it over time. A little bit at a time makes a big difference.

Lauren Pankey: Yeah, and I’ll just add, for AI governance, it’s okay to be skeptical of it, and okay to be skeptical of the output of AI, but get ahead of the governance. Most of your employees are probably already using it.

So get ahead of it: put a policy in place, train your employees, and give them the assurance that they can use it, but that it’ll be used this way, under a policy, with continuous training on the AI.

Brian Jackson: Well, thank you both for your contribution to the webinar this morning, and thank you to everyone who attended. We appreciate your attendance and your support of BMSS and the family of companies. The webinar will be available in recorded form on our website if you want to go back and reference it. If you have any specific questions or anything we can help you with, please feel free to reach out; we’ll be happy to answer additional questions or help you with your AI journey. I think it’s important to recognize that artificial intelligence is a technology we all want to take advantage of and see great value in, but we also have to realize that there are risks that come with deploying it, as with any technology in our organization. So, thank you again for attending this morning. I hope you all have a great rest of your day, and we’ll see you next time.
