AI is transforming every corner of business, from forecasting demand to exposing new cybersecurity risks, especially in operational industries like manufacturing. According to the Epicor 2025 Supply Chain Agility Index, 56 % of organizations now report high AI readiness, and over 90 % are actively hiring for AI roles.

As organizations embed AI deeper into production, quality control, and decision-making, the attack surface is growing fast. Threat actors are already using AI to supercharge phishing, automate intrusions, and exploit weak points across the supply chain.

This is the final post in our Cybersecurity Awareness Month series. In Part 1, we looked at ransomware on the factory floor. In Part 2, we followed the risk across supply chains. Now, we arrive at the digital core: AI itself—the intelligence layer guiding modern operations.

In 2025, AI is no longer an aspirational add-on; it’s the operational brain. And like any brain, it can be manipulated. The question businesses need to consider isn’t whether AI brings new risks, but how prepared you are to defend against them.

This isn’t about discouraging AI adoption, quite the opposite. AI is a critical enabler of productivity, innovation, and competitiveness for all industries. But like any transformative technology, it must be done with the right safeguards in place to make it secure, responsible, and scalable.

The Dual Nature of AI: Power and Vulnerability

AI widens the attack surface. As the IBM X‑Force 2025 Threat Intelligence Index notes, generative AI frameworks introduce new classes of vulnerabilities, including remote code execution flaws within AI agent frameworks.

Security teams know this. According to Trend Micro’s 1H 2025 State of AI Security report, 93% of security leaders expect daily AI attacks this year.

Meanwhile, manufacturing-specific forecasts are already sounding alarms: LevelBlue’s 2025 survey shows only 32% of manufacturers feel prepared for AI-driven threats, even though 44% expect them.

So, AI is both a tool and a target. Attackers are evolving, and so must we.

Threats Across the AI Lifecycle

These are the most common AI-specific attack vectors we’re seeing across industry environments today. The good news: Understanding the vulnerable categories is often enough to know where to harden your defenses.

Phase

Threat Type

Examples

Data and training

Data poisoning

Corrupting training data so the model “learns” bad behavior (e.g., ignoring real defects).

Model supply chain

Trojan/backdoored models

Hidden triggers in pre-trained models that can be activated later.

Prompt/input

Injection and jailbreaks

Malicious inputs override safeguards or expose sensitive info.

Model behavior

Embedded manipulation

Hidden content or instructions embedded in outputs to influence actions.

Runtime

Polymorphic AI malware

Malware that uses AI to constantly change and evade detection.

Deployment and agents

Agent vulnerabilities

Attackers insert backdoors through weak or exposed AI agent frameworks.


These are not abstractions. They are real, ongoing developments. Anthropic, for example, has publicly reported thwarting repeated attempts to use its Claude AI to generate phishing, malicious code, and bypass safety filters.

Why AI Risks Are Especially Critical in Operational Environments

  • Physical and Operational Consequences: A manipulated AI model can mis-predict anomalies, schedule the wrong maintenance, or trigger false alerts leading to operational disruption, equipment damage, or worse.
  • Tight Model-to-Machine Integration: In many operational environments, from smart factories to logistics hubs, AI outputs feed directly into control systems or robotics. That bridge becomes an attack surface.
  • Compound Threats: Attackers themselves are leveraging AI to scale attacks, automate lateral movement, and spoof communications.
  • Rising Adoption = Rising Exposure: With AI rapidly expanding in supply chains, particularly among mid-sized manufacturers and distributors, the window to secure systems is shrinking.
  • Reputational and Legal Fallout: Bad AI outcomes including biased decisions, manipulated ordering, and safety failures can expose firms to compliance, legal, and reputational consequences.

The Epicor Approach and How We’re Designing for Safe AI

At Epicor, we believe that “real AI for real industries” means intelligence built with security, governance and industrial context in mind. Our published AI Code of Conduct lays out how we ensure fairness, explainability, lifecycle security and traceability in every model and deployment:

  • AI modules are deployed through segmented, controlled interfaces, not unrestricted agents.
  • Guardrail and explainability layers allow users to audit and understand suggestions.
  • Our model supply chain is tightly curated with vulnerability scanning and partner vetting.
  • Continuous monitoring and anomaly detection occur at every layer.

Epicor’s AI strategy reflects a simple truth: adopting AI shouldn’t require trading off security. Here’s how you can apply the same principles within your own environment.  

A Midmarket Playbook: Securing AI (without overengineering)

For many organizations across the supply chain, the barriers to AI adoption are less about ambition and more about execution. In fact, 40.9 % cite system integration and 40.1 % cite lack of expertise as top challenges. That’s why security needs to be embedded, not an afterthought.

Here is a practical, scalable framework for businesses with limited IT staff integrating AI:

1. Secure the Data Foundation

Action: Harden your data pipelines and control access from the ground up.

  • Vet data sources and models for vulnerabilities.
  • Segment sensitive data.
  • Maintain lineage, version control, and traceability.

Research from Futurum highlights that clean, governed data, with role-based access, encryption, and audit logging is essential to operationalizing AI securely.

2. Vet Models and Supply Chains

Action: Trust nothing blindly. Validate every model and dependency.

  • Demand SBOMs, origin details, and integrity checks from model providers.
  • Screen third-party/open-source models and libraries for hidden risks.
  • Use model-integrity checks or equivalent security testing to detect hidden trojans.

3. Harden Access and Runtime

Action: Control who and what can influence your models.

  • Apply least-privilege access to agents and model weights.
  • Guard runtime. Validate outputs for anomalies.
  • Watch for prompt injection and log interactions.

4. Monitor Models Continuously

Action: Watch for drift, bias, or anomalies in real time.

  • Detect drift, bias, or unexpected output changes.
  • Set alerts for divergence and anomalies.

5. Red-Team and Test Often

Action: Stress-test your AI layer like an attacker would.

  • Run adversarial drills simulating prompt injection, poisoning, or agent exploits.
  • Make testing part of your security rhythm.

6. Architecture and Isolation

Action: Keep your AI stack segmented and fallback-ready.

  • Isolate AI systems from core operations.
  • Use shadow models or read-only inference as backups.

7. Governance with Explainability

Action: Build trust through transparency and accountability.

  • Log decisions and maintain transparency.
  • Require human review for sensitive or high-stakes decisions.

Securing What’s Next

AI is not magic, but its risk profile is already evolving faster than many organizations’ capabilities. And because most companies expect measurable AI ROI within 6–18 months, securing your systems can’t wait for a future “maturity stage.” It has to happen in parallel.

As you embark on or scale AI in your operations:

  • Start with strong foundations (data, access, segmentation).
  • Test relentlessly under adversarial conditions.
  • Monitor continuously, not just during deployment.
  • Demand transparency and explainability.
  • And adopt this mindset: Your AI can be a vector, not just a tool.

As business across the supply chain, from manufacturers to distributors and down the line, embed intelligence across their operations, success will hinge on more than just performance. It will depend on trust. Trust in the data. Trust in the models. Trust in your supply chain. Trust in your ability to detect, respond, and recover.

This post wraps up our Cybersecurity Awareness Month blog series from protecting the factory floor, to securing the digital supply chain, to defending the AI layer that drives modern manufacturing. If there's one takeaway, it’s this: Cybersecurity isn’t a cost center. It’s the foundation of operational resilience.

Because in a world where threats evolve as fast as technology, the businesses that thrive will be the ones who build security into everything they do, from shop floor to smart core.

Learn how Epicor AI solutions can help your business thrive.

Dan Houdek
Sr. Director of Product Marketing

Dan Houdek helps organizations build lasting relationships with customers and partners, driving revenue and market share. With experience in marketing, sales, and operations, Dan has successfully led initiatives for top brands like Dell, Microsoft, and AMD, delivering impactful marketing strategies and innovative technology solutions. 

Read More by Dan Houdek