Cybersecurity in the Age of Artificial Intelligence

Cybersecurity in the Age of Artificial Intelligence

Vozniak NazarOctober 28, 2025
Share:

Cybersecurity in the Age of Artificial Intelligence

This material is based on the lecture “Cybersecurity in the Age of Artificial Intelligence”, delivered to students of the Faculty of Finance and Business Management at Ivan Franko National University of Lviv.

Introduction

Cybersecurity today is no longer a narrow technical topic — it’s part of our daily lives. In 2025, it covers everything connected to the Internet: smartphones, laptops, smartwatches, security cameras, banking services, even household appliances.

Cyber threats are no longer limited to governments or big corporations — each of us is a potential target.

Why does it matter? Data has become a commodity — it’s bought, sold, and stolen. Your photos, chats, banking passwords, or even your shopping history all have value for attackers. Losing them can mean not only financial damage but also reputational or even physical risks.

Where We Encounter Cybersecurity?

  • At home: phishing SMS, suspicious app updates, or open Wi-Fi networks.

  • At work: corporate emails, Google Docs, or accounting software that can serve as entry points for attacks.

  • In finance: online payments, banking apps, crypto wallets — all need multilayer protection.

  • On social media: every hacked account becomes a tool for spreading misinformation or scams.

  • In government services: from Diia to online registries containing personal data of millions of Ukrainians.

In Ukraine, the topic is particularly urgent. During wartime, the cyber front has become an extension of the real battlefield. Russian intelligence services attack state systems, energy infrastructure, spread fake messages via hacked accounts, and deploy malware that paralyzes companies. Ordinary citizens are often targets — their phones, bank apps, or emails can become doorways for attackers to infiltrate critical systems.

Real-World Damage Examples

  1. Government registries and “Diia” outages – December 19, 2024: a large cyberattack on the Ministry of Justice registries caused temporary service suspension and partial Diia failure.

  2. Telecom disruptions – December 12, 2023: a massive hack against Kyivstar disrupted mobile networks, internet, ATMs, transport payment systems, and even air-raid alerts.

  3. Energy infrastructure – attacks on power plants and grids led to blackouts and heating outages (notably in 2015–2016, affecting up to 20 % of Kyiv residents).

  4. Economic damage – the NotPetya attack, originally targeting Ukrainian firms, caused about $10 billion in global losses and roughly $560 million in Ukraine.

The consequences extend beyond money — cyberattacks erode trust, damage infrastructure, and amplify disinformation that weakens public morale.

There is a common stereotype that cybersecurity is exclusively the domain of IT professionals.
In reality, things are quite different: most successful cyberattacks don’t rely on sophisticated hacking tools — they happen because of ordinary human mistakes.

Every day we make small choices: whether to open a suspicious message, trust a “bank” call, or reuse the same password across different services.

How can breaches happen?

Example 1 — The Generator
You buy a new diesel generator to keep the lights on during power outages.
To control it, you install a mobile app that connects to Wi-Fi. Convenient? Yes.
But at the same time, you’ve effectively opened a door for hackers into your home network.

Example 2 — Surveillance Cameras
Someone installs video surveillance cameras near their house to feel safer.
They’re connected to the internet, allowing remote viewing from a phone.
However, if you didn’t change the default password, that same live feed might be watched by someone in Russia.

Example 3 — “Diia” and Banking Apps
A user logs into Diia or a banking app — as routinely as checking the weather.
But a single phishing email or fake login page can be enough for their personal data to fall into the hands of hostile intelligence services.

This is the Ukrainian reality: everyone must understand that a cyberattack can happen anytime. Without basic cyber hygiene — strong passwords, two-factor authentication, and attention to suspicious links — we leave the door open for the enemy, who looks for weak points not only in the army or state systems but also in our personal devices.

Just as we wash our hands to avoid infection or fasten our seatbelt to stay safe in an accident, we should take the same care with our digital habits.

Today, with access to tools like ChatGPT or Gemini, you can literally “ask artificial intelligence” whether a message looks suspicious. It doesn’t replace critical thinking — but it often helps you notice what you might have missed.

How AI Is Already Impacting Cybersecurity

Artificial intelligence has become one of the most influential forces in modern cybersecurity — simultaneously a threat and a defense.

On the attack side:
Fraudulent emails now look far more convincing than they did a few years ago.
Algorithms generate messages without the spelling mistakes typical of “classic” phishing attempts.

We now see forged audio and video where a “familiar voice” asks you to urgently transfer money, or a “video address” that appears completely real.

On the defense side:
The very same technologies help companies analyze billions of network events in real time.

AI can detect unusual activity in data streams faster than a human can — and stop an attack before it becomes dangerous.

To understand just how serious these risks can be, let’s look at the main threats that every user faces in everyday life.

The Main Cyber Threats Today

Let’s start with something simple: most cyberattacks don’t look like what we see in Hollywood movies — a lone hacker in a dark bunker stealing top-secret data.
In reality, they’re made up of small manipulations and human mistakes.

Below are the most common scenarios — with a short note on how to protect yourself.

1. Phishing and Social Engineering

Most breaches begin with a message. Social engineering is an attack on your mind — on your emotional reaction. It plays on panic, empathy, or the desire to help.

Common scenarios include:

  • Telegram / Viber / Social media – “Please vote”
    A message from a friend: “My 8-year-old niece (or a friend’s daughter) is in a contest — could you please vote?” You imagine a child, click the link, log in via Google or enter payment info — and your account or money is gone.
    The message likely came from a hacked account, so it looks completely legitimate.

Scammers often use classic bait phrases such as:

  • The UN is giving out funds” – appeals to authority.

  • It really works!” – fake social proof, makes it seem safe.

  • Got mine in 2 hours!” – urgency + false guarantee of reward.

How to spot them:

  1. Unverified authority — “UN / government / bank is giving money” with no official source. Always check the real website, not the email link.

  2. Too good to be true — instant payouts or “get it in 2 hours” are red flags.

  3. Social proof as manipulation — “a thousand people already got it” is meaningless without verified reviews.

  4. Suspicious links — hover to check the domain; small differences often hide fakes.

  5. Time pressure — words like “urgent” or “within 24 hours” are designed to disable critical thinking.

Other examples:

  • Email: “Your account has been blocked.” Fake “bank” link demanding login.

  • Work chats: fake colleague profile asking for document access or payment.

Tips:

  • Verify the sender in a separate chat or by phone.

  • Never log in through a received link — open the site manually.

  • Pause when you see emotional triggers (child help, blocked account).

2. Data Breaches and Password Reuse

A small shop where you once registered may leak its user database — your email and password end up online.

If you reuse that password elsewhere, attackers gain access to many of your accounts.

Tips:

  • Use a password manager and unique passwords for every important service.

  • Regularly check if your data has been leaked (there are free online tools) and change passwords promptly.

3. Weak Passwords and Human Error

“123456” and “qwerty” are still among the most common passwords.
Even complex ones won’t help if you fall for a phishing site or use unsecured public Wi-Fi.

Tips:

  • Always combine strong passwords with two-factor authentication (2FA).

  • Avoid logging in through third-party sites without checking the URL.

4. Cloud Vulnerabilities and Access Misconfigurations

Cloud services let you access files from anywhere — but one wrong permission can expose them publicly.

Attacks often start from a single compromised account and spread “horizontally” across corporate systems.

Example:
An employee accidentally sets a shared folder to “Anyone with the link can view,” making critical files appear in search results.

Tips:

  • Review sharing permissions in Google Drive, OneDrive, etc.

  • Separate work and personal accounts.

5. Automated Attacks, Bots, and Scripts

Attackers scale up using bots that simultaneously scan thousands of websites, send phishing emails, and guess passwords.

Tips:

  • Keep all software updated — patches close known vulnerabilities.

  • Limit login attempts and enable basic website protection tools.

6. New Threat Scenarios: SIM-Swap, Fake Apps, and QR Scams

A few other realistic risks:

  • SIM-swap: a criminal reissues your SIM card to intercept SMS codes for 2FA.

  • Fake apps: look like real ones in app stores but steal data.

  • QR scams: stickers with fake promotions lead to phishing sites.

Tips:

  • Use code-generator apps (Authenticator) or hardware keys instead of SMS for 2FA.

  • Download apps only from official sources and check reviews.

  • Before scanning a QR code, check the context — who placed it and why.

Threats have become closer and more sophisticated — they disguise themselves as something familiar and exploit our emotions and automation habits.

Now that we understand what we face every day, let’s explore how artificial intelligence amplifies these risks — and at the same time becomes one of our strongest tools for defense.

AI as a Tool for Attackers

Artificial intelligence brings both new opportunities and new threats. Hackers now have tools that make their operations faster, more scalable, and far more convincing. Let’s look at how this works — and what can be done about it.

1. Personalized Phishing and Automation

In the past, phishing was easy to spot: broken grammar, strange sender addresses, and awkward phrasing.

Today, AI can write an email or message in the exact tone and style of a specific person, using public data — posts, profiles, and comments.

By collecting such open information, AI can build a psychological profile: your habits, friends, writing style, and interests. Then it generates personalized messages that feel completely natural to the victim.

A powerful mix of personalization, voice cloning, and automation makes this type of attack incredibly effective.

Typical scenario:

  1. An algorithm scrapes your and your friends’ public posts and comments.

  2. It crafts a realistic message mimicking your friend’s tone, mentioning a recent event (e.g., “I saw your post about the volunteer fundraiser”).

  3. After you click the link, you receive a voice message — a cloned version of your friend’s voice urgently asking you to transfer money for “tickets, medicine, or a fine.”

  4. The same message is sent automatically to hundreds of contacts — looking “alive” and authentic.

Why it works:
It simultaneously hits all three pillars of trust:

  • Content (personalized message)

  • Source (familiar voice)

  • Context (real event)

Automation gives it scale — not one victim, but thousands.

Example:

“Hey [Name], saw your photo from yesterday’s volunteer drive. Could you please vote for our project? It closes tomorrow!”

The link looks perfectly legitimate — but it’s phishing.

Tip: Limit how much personal data you share publicly, review your social media privacy settings, and avoid replying to unfamiliar requests in comments.

2. Voice Cloning — The “Friend’s Call”

Voice cloning tools can recreate a person’s tone with uncanny precision.

Example: you get a voice message or call from a “friend” sounding anxious — “Hey, can you send me 5,000 hryvnias right now? I’m in trouble — I’ll pay you back.”

You recognize the voice and instinctively want to help.

⚠️ Key takeaway: Never trust the voice alone. Always verify through another channel — message your friend in a known chat, call their other number, or confirm in person.
Such scams rely on emotional urgency to bypass rational thinking.

3. Deepfake Videos and Visual Manipulation

Deepfake videos can show people saying things they never said.
They’re used for blackmail, defamation, or fake “proof” designed to provoke action.

At the corporate level — reputational or financial damage;
At the personal level — extortion or humiliation.

Example: The “CEO Deepfake Call”
A British engineering firm, Arup, confirmed a major financial loss after a deepfake scam.
An accountant received a video call from what looked and sounded like the company’s director, instructing them to transfer funds to a “new partner account.”
It looked authentic — but it was fake.

How to respond:

  • Verify the video’s source via official channels.

  • Check metadata and whether the video appeared simultaneously on multiple official accounts.

  • Contact the person directly before taking action.

4. AI-Generated Malware and Polymorphic Variants

Cybercriminals now use the same AI models we do — not for writing essays, but for writing malware.

AI can instantly generate or modify malicious code, saving hackers weeks of effort. One major innovation is polymorphism — each new copy of the malware looks slightly different from the last, like a digital chameleon.

For antivirus tools, every variation looks like a new file — often slipping past detection.

Defense:
Use behavior-based security systems (EDR/XDR) that monitor what a program does, not just how it looks.

If, for example, Word suddenly starts deleting files or connecting to strange servers — the system immediately raises an alert.

5. Attacks on AI Models (Data Poisoning & Prompt Injection)

AI systems “learn” from data — just like students from lectures.
If attackers insert misleading examples during training, the model learns the wrong patterns — this is data poisoning.

Another danger is model extraction, where attackers manipulate the AI with crafted prompts to reveal sensitive information from its “memory” — such as private training data.

How to mitigate:

  • Validate and sanitize all datasets before training.

  • Apply filters and human review during model input and retraining.

  • Restrict who can access or query the model.

  • Use privacy-preserving techniques to prevent data leaks.

6. Other Risks: API Keys and Secret Leaks

API keys are like digital passes — credentials that let software systems communicate (e.g., your website connecting to a payment service). If such a key is exposed, it’s like losing the key to your apartment.

What can happen:

  • Attackers use your credentials to run up huge server bills.

  • Extract confidential data.

  • Access your internal services.

Prevention:

  • Use scanners to detect exposed secrets in repositories (like GitHub).

  • Apply the principle of least privilege — limit what each key can do.

  • Rotate keys regularly, just like changing locks.

  • Store secrets only in secure vaults or secret managers.

AI gives power to both defenders and attackers. Our goal is to think one step ahead — not only patch existing holes, but anticipate which tools adversaries might use tomorrow.

Next, let’s explore how AI can also detect attacks and strengthen our defenses.

AI as a Defender

Let’s talk about how AI actually protects us.

1. Detecting Suspicious Behavior — “the watchful guard”

Imagine a guard who remembers who normally enters, from which door, and at what time.
That “guard” is an AI system trained on normal account behavior.

It knows, for example: Olena usually opens certain folders from Kyiv between 09:00 and 18:00. If the system suddenly sees a login at 03:00 from another country, or a mass download of documents, it raises an alert.

In practice this looks like: Olena (or the admin) gets a warning, the session is temporarily blocked, and the security team starts investigating.

For a regular user the rule is simple:
If you get a “suspicious login/activity” notification — do not ignore it.
That single warning often stops real damage.

2. Fast Response — automatic first steps

When something looks dangerous, AI can take safe automatic actions before a human even reacts. For example, it can:

  • temporarily limit access,

  • ask for extra identity verification,

  • isolate a suspicious device from the network.

Picture this: someone tries to log in to your account. Before the attacker can do anything, the system asks you to confirm the login through an app on your phone.
That buys time to check if it’s really you.

Important: automation doesn’t replace humans.
It just reduces damage fast and gives humans time to think.
So if the system asks you to confirm something — don’t just click “later”, actually check it.

3. Fighting Phishing — “the mail filter”

Modern phishing emails can look completely professional.

Here AI helps by analyzing not only the visible text, but also hidden signals:

  • the real sender address,

  • where the links actually lead,

  • whether the domain matches the supposed organization.

If something doesn’t match, the email gets quarantined or marked with a warning.

Example: you get an email “from your bank” asking you to update your info.
The filter flags the domain as suspicious and adds a red warning.

Your job:

  • don’t click the link in that email;

  • open the bank app or website yourself.

Also: if your email client warns you — read the warning. It’s not “just spam”, it might be an active attempt to steal your money.

4. Voice and Video Checking — extra verification

Deepfake audio and video can look (and sound) very real now.
AI systems are being trained to notice tiny artifacts: wrong lip sync, weird lighting, unnatural pauses in voice, audio frequency glitches, editing traces.

That’s not a perfect guarantee, but it’s one more signal. If a platform or service labels something as “possibly manipulated,” treat it carefully.

Practical rule:
If you get an urgent voice message asking for money — before sending anything, call that person on their known number or write to them in your usual chat. This one habit saves people a lot of money.

5. AI supports humans — less manual noise

When an incident happens, security teams get buried in logs, alerts, IP addresses, file changes, login attempts. A human cannot manually review all of that in time.

AI can:

  • highlight which actions were taken,

  • which files were changed,

  • which IPs triggered alarms,

  • and suggest a likely timeline.

So instead of spending a full day digging, the analyst gets a clear starting point in minutes.
For an organization, that means faster reaction and lower losses.

But: AI output is a hint, not a verdict.
A human still has to confirm before doing anything irreversible (blocking accounts, shutting systems down, etc.).

6. Device Protection — stopping the attack locally

If malicious code lands on a computer, it’s critical to stop it before it spreads across the network.

Behavior-based protection systems watch what programs actually do.
If something suddenly:

  • starts encrypting lots of files, or

  • tries to contact strange servers,

the system can automatically kill that process or isolate the whole device from the network.

For you this means:

  • keep your protection up to date,

  • use antivirus / endpoint protection,

  • keep backups.

If something goes wrong, you can recover faster. These systems act very quickly — but sometimes they will still ask an admin to approve the final action.

Practical Guidance for Users

Now, a few concrete things you can already do today.
This is just digital hygiene. Nothing magical, but it works.

1. Basic hygiene — passwords and multi-factor authentication

Foundation rules:

  • A unique password for every important service.

  • MFA (multi-factor authentication) everywhere you can.

How to do it, step by step:

  1. Install a password manager.
    It stores all your passwords in an encrypted vault and can generate long, random passwords for you. You only need to remember one thing: your master password.

  2. Choose a strong master password.
    Use a long passphrase of 4–6 words, unique, ideally with a fragment only you would know. Turn on biometrics or a PIN on your phone for quick access.

  3. Turn on MFA for critical accounts:
    email, bank, socials, cloud storage.
    Best options: an authenticator app (Google Authenticator, Authy, etc.) or a hardware key (like YubiKey). SMS is the weakest option, because SIM-swap attacks let criminals steal your text codes.

  4. Store backup codes somewhere safe (not “notes.txt” on your desktop).
    Put them in a secure note inside the password manager or somewhere physically safe.

Simple habit phrase you can use:

“No, I don’t share passwords. I can give you temporary access through the password manager or you can send an official request.”

That alone filters out a lot of social engineering.

2. Verifying sources — spotting deepfakes and fakes

Here’s the mental checklist you should run any time something feels off:

What to check if you’re suspicious:

  1. Context. Where did this message come from? Is there official confirmation on the organization’s real site or channel?

  2. Second channel. If a “friend” asks for money via voice message, call them on the number you already trust, or write in your usual chat.
    Say: “I’ll call you on your number to confirm.”

  3. Technical signs.

    • In video: weird lip sync, strange lighting, unnatural eye movement, sudden artifacts.

    • In voice: oddly smooth tone, no natural breathing pauses.
      If something feels “off,” treat it as a warning.

  4. Image check. If you get a screenshot or photo “as proof,” do a reverse image search or check official sources.

  5. Don’t forward instantly. Train yourself to pause ~30 seconds before you send money or forward “urgent news” further.

Example habit:
You get a voice message from a “friend”: “I urgently need money.”
Answer in your normal chat: “I’m calling you right now on your number.”
If it’s fake, the attacker usually disappears. If it’s real, your friend will just pick up.

3. Using AI tools for your own safety

Now let’s talk about tools that work for you.

There are two main categories here:

  • next-generation device protection,

  • leak monitoring.

Modern endpoint security / antivirus:
Look for solutions that don’t just rely on old-school signature matching, but also monitor behavior. If some app suddenly starts encrypting half your files, it can be stopped before ransomware spreads. Regular OS and app updates are mandatory — not optional.

Leak monitoring:
There are services that alert you if your email or password shows up in a known data breach. Connect those alerts to your main email so you’ll know fast, and change the compromised password immediately.

Browser filters and extensions:
Use browser add-ons that warn you about suspicious websites or lookalike domains.
They’re not perfect, but they reduce the chance you’ll land on a phishing page just by clicking too fast.

Backups:
Make routine backups of important documents to external storage or a secured cloud.
This protects you from ransomware and from your own mistakes.

Ask whoever maintains your laptop/phone to set up:

  • automatic updates,

  • modern protection,

  • and backup.

That alone already puts you ahead of most victims.

Password leak check:

  • HaveIBeenPwned (HIBP) or Firefox Monitor: enter your email and see if it’s in known breaches.
    If yes — change that password immediately and turn on MFA.

  • In Ukraine there are Telegram bots that offer “leak check,” but treat them carefully.
    The safer option is to use reputable, well-known services, not random bots.

AI assistants:
Tools like ChatGPT or Gemini can help you analyze a suspicious email.
You can paste the text and ask: “Does this look like phishing? What is off about it?”
They can also generate a checklist of what to verify before you click.

4. If your financial data leaked — step-by-step

How do you know your data is compromised?

  • Sudden charges or weird card payments.

  • SMS / push notifications confirming transactions you didn’t make.

  • Calls or emails from a “bank security service” you never contacted.

  • Alerts saying your data appeared in a leak.

What to do in Ukraine:

  1. Call your bank immediately (hotline or app) and ask them to block the card/account.
    Most major Ukrainian banks (PrivatBank, monobank, Oschadbank, Raiffeisen, etc.) can do this instantly.

  2. Dispute the transactions.
    Banks in Ukraine allow you to file for reversal / chargeback (Visa/Mastercard rules).

  3. Save evidence.
    Screenshots, SMS, statements — you’ll need them for the bank and for cyber police.

  4. Change passwords and enable MFA, especially for banking apps and email.

  5. Report the incident to the Cyber Police of Ukraine (cyberpolice.gov.ua). They have an online complaint form for cybercrime.

  6. If needed, freeze your credit history.
    In Ukraine you can request a freeze from a credit bureau so nobody can take out a loan in your name.

All of this sounds “basic,” but this basic layer is exactly what blocks most attacks.
You don’t have to be a security engineer.
You just have to build a few habits and actually react when you see a red flag.

A Look Into the Future

We’ve talked about today’s threats and defenses — but what lies ahead may be even more fascinating.

AI is evolving so quickly that within just two or three years, we might see an entirely new generation of both cyberattacks and protection systems.

Attacks of the Future

Autonomous malicious agents.
Imagine software that doesn’t just execute hacker commands but acts on its own — finding vulnerabilities, combining exploits, evading defenses, and learning as it goes.
This isn’t science fiction — early prototypes of such agents already exist.

Attacks built on trust simulation.
If today’s deepfakes are only about fake videos or voices, tomorrow they could be full virtual avatars of your coworkers — lifelike replicas inside Teams or Zoom that look, move, and speak exactly like the real person.

Behavioral-data weaponization.
Hackers may no longer need your password — they might steal your behavioral patterns: how you type, your writing style, your decision flow.
With that, they can impersonate you in emails, chats, and even negotiations.

AI Guardians

We are entering an era where companies may entrust AI security agents to watch over systems — programs that:

  • monitor traffic 24/7,

  • detect and block attacks in real time,

  • and even negotiate or counteract malicious AI agents automatically.

But this raises a question: Are we ready to surrender that much control?

What happens if an AI decides to block a “suspicious” employee?
Will we accept the judgment of a machine without human verification?

We’re heading toward a world where the struggle will no longer be “human vs hacker,” but “AI vs AI.” The real question is whether we’re prepared — not just to defend ourselves from intelligent systems, but to collaborate with them.

The future of cybersecurity isn’t just about new technologies — it’s about a new mindset.
Human + AI as allies.

The faster we learn to work with artificial intelligence — to trust it, guide it, and oversee it — the greater our chances of winning this technological race.

Share:
Built with v0